R is a programming language and environment commonly used in statistical computing, data analytics and scientific research.
It is one of the most popular languages used by statisticians, data analysts, researchers and marketers to retrieve, clean, analyze, visualize and present data.
Due to its expressive syntax and easy-to-use interface, it has grown in popularity in recent years.
R-Programming.ppt: an introduction to the R programming language
This presentation is based on the R programming language and gives an easy-to-follow introduction; the explanations are kept simple so that anyone can learn the basics of R from these slides.
The document provides an outline of topics covered in R including introduction, data types, data analysis techniques like regression and ANOVA, resources for R, probability distributions, programming concepts like loops and functions, and data manipulation techniques. R is a programming language and software environment for statistical analysis that allows data manipulation, calculation, and graphical visualization. Key features of R include its programming language, high-level functions for statistics and graphics, and ability to extend functionality through packages.
2. Outline
• Introduction:
– Historical development
– S, Splus
– Capability
– Statistical Analysis
• References
• Calculator
• Data Type
• Resources
• Simulation and Statistical Tables
– Probability distributions
• Programming
– Grouping, loops and conditional execution
– Function
• Reading and writing data from files
• Modeling
– Regression
– ANOVA
• Data Analysis on Association
– Lottery
– Geyser
• Smoothing
3. R, S and S-plus
• S: an interactive environment for data analysis developed at Bell Laboratories since 1976
– 1988 - S2: RA Becker, JM Chambers, A Wilks
– 1992 - S3: JM Chambers, TJ Hastie
– 1998 - S4: JM Chambers
• Exclusively licensed by AT&T/Lucent to Insightful Corporation, Seattle WA. Product name: “S-plus”.
• Implementation languages: C, Fortran.
• See: http://cm.bell-labs.com/cm/ms/departments/sia/S/history.html
• R: initially written by Ross Ihaka and Robert Gentleman at the Dep. of Statistics of the U of Auckland, New Zealand, during the 1990s.
• Since 1997: international “R-core” team of ca. 15 people with access to a common CVS archive.
4. Introduction
• R is “GNU S” — A language and environment for data manipulation, calculation and graphical display.
– R is similar to the award-winning S system, which was developed at Bell Laboratories by John Chambers et al.
– a suite of operators for calculations on arrays, in particular matrices,
– a large, coherent, integrated collection of intermediate tools for interactive data analysis,
– graphical facilities for data analysis and display, either directly at the computer or on hardcopy,
– a well developed programming language which includes conditionals, loops, user-defined recursive functions and input and output facilities.
• The core of R is an interpreted computer language.
– It allows branching and looping as well as modular programming using functions.
– Most of the user-visible functions in R are written in R, calling upon a smaller set of internal primitives.
– It is possible for the user to interface to procedures written in C, C++ or FORTRAN for efficiency, and also to write additional primitives.
5. What R does and does not
o data handling and storage: numeric, textual
o matrix algebra
o hash tables and regular expressions
o high-level data analytic and statistical functions
o classes (“OO”)
o graphics
o programming language: loops, branching, subroutines
o is not a database, but connects to DBMSs
o has no graphical user interfaces, but connects to Java, Tcl/Tk
o the language interpreter can be very slow, but allows calling your own C/C++ code
o no spreadsheet view of data, but connects to Excel/MsOffice
o no professional / commercial support
6. R and statistics
o Packaging: a crucial infrastructure to efficiently produce, load and keep consistent software libraries from (many) different sources / authors
o Statistics: most packages deal with statistics and data analysis
o State of the art: many statistical researchers provide their methods as R packages
7. Data Analysis and Presentation
• The R distribution contains functionality for a large number of statistical procedures.
– linear and generalized linear models
– nonlinear regression models
– time series analysis
– classical parametric and nonparametric tests
– clustering
– smoothing
• R also has a large set of functions which provide a flexible graphical environment for creating various kinds of data presentations.
8. References
• For R,
– The basic reference is The New S Language: A Programming Environment for Data Analysis and Graphics by Richard A. Becker, John M. Chambers and Allan R. Wilks (the “Blue Book”).
– The new features of the 1991 release of S (S version 3) are covered in Statistical Models in S edited by John M. Chambers and Trevor J. Hastie (the “White Book”).
– Classical and modern statistical techniques have been implemented.
• Some of these are built into the base R environment.
• Many are supplied as packages. There are about 8 packages supplied with R (called “standard” packages) and many more are available through the CRAN family of Internet sites (via http://cran.r-project.org).
• All the R functions have been documented in the form of help pages in an “output independent” form which can be used to create versions for HTML, LaTeX, text etc.
– The document “An Introduction to R” provides a more user-friendly starting point.
– An “R Language Definition” manual
– More specialized manuals on data import/export and extending R.
10. Object orientation
primitive (or: atomic) data types in R are:
• numeric (integer, double, complex)
• character
• logical
• function
out of these, vectors, arrays, lists can be built.
11. Object orientation
• Object: a collection of atomic variables and/or other objects that
belong together
• Example: a microarray experiment
• probe intensities
• patient data (tissue location, diagnosis, follow-up)
• gene data (sequence, IDs, annotation)
Parlance:
• class: the “abstract” definition of it
• object: a concrete instance
• method: other word for ‘function’
• slot: a component of an object
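The class/object/method/slot vocabulary maps onto R's S4 system. Below is a minimal, hypothetical sketch: the class name Microarray, its slots and the generic summarize are invented here purely for illustration.
# Minimal S4 sketch: a hypothetical "Microarray" class with two slots
setClass("Microarray",
         representation(intensities = "numeric",    # probe intensities
                        patient     = "character")) # e.g. tissue location
# Create a concrete instance (an object) of the class
chip <- new("Microarray",
            intensities = c(120.5, 98.2, 143.7),
            patient     = "proximal colon")
# A generic function plus a method for this class
setGeneric("summarize", function(x) standardGeneric("summarize"))
setMethod("summarize", "Microarray", function(x) {
  cat("Patient:", x@patient, "\n")
  cat("Mean intensity:", mean(x@intensities), "\n")
})
summarize(chip)   # dispatches on the class of 'chip'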
12. Object orientation
Advantages:
Encapsulation (can use the objects and methods someone else has
written without having to care about the internals)
Generic functions (e.g. plot, print)
Inheritance (hierarchical organization of complexity)
Caveat:
Overcomplicated, baroque program architecture…
13. variables
> a = 49
> sqrt(a)                       # numeric
[1] 7
> a = "The dog ate my homework"
> sub("dog","cat",a)            # character string
[1] "The cat ate my homework"
> a = (1+1==3)
> a                             # logical
[1] FALSE
14. vectors, matrices and arrays
• vector: an ordered collection of data of the same type
> a = c(1,2,3)
> a*2
[1] 2 4 6
• Example: the mean spot intensities of all 15488 spots on a chip:
a vector of 15488 numbers
• In R, a single number is the special case of a vector with 1
element.
• Other vector types: character strings, logical
15. vectors, matrices and arrays
• matrix: a rectangular table of data of the same type
• example: the expression values for 10000 genes for 30 tissue
biopsies: a matrix with 10000 rows and 30 columns.
• array: 3-,4-,..dimensional matrix
• example: the red and green foreground and background values
for 20000 spots on 120 chips: a 4 x 20000 x 120 (3D) array.
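A minimal sketch of creating a matrix and a 3-dimensional array; the dimensions below are small toy values, not the chip sizes mentioned above.
# a 2 x 3 matrix, filled column by column
m <- matrix(1:6, nrow = 2, ncol = 3)
dim(m)        # 2 3
m[2, 3]       # element in row 2, column 3 -> 6
# a 2 x 3 x 4 array (e.g. two channels x three spots x four chips)
a <- array(rnorm(24), dim = c(2, 3, 4))
dim(a)        # 2 3 4
a[1, 2, 3]    # a single cell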
16. Lists
• vector: an ordered collection of data of the same type.
> a = c(7,5,1)
> a[2]
[1] 5
• list: an ordered collection of data of arbitrary types.
> doe = list(name="john",age=28,married=F)
> doe$name
[1] "john"
> doe$age
[1] 28
• Typically, vector elements are accessed by their index (an integer),
list elements by their name (a character string). But both types
support both access methods.
17. Data frames
data frame: is supposed to represent the typical data table that
researchers come up with – like a spreadsheet.
It is a rectangular table with rows and columns; data within each
column has the same type (e.g. number, text, logical), but
different columns may have different types.
Example:
> a
localisation tumorsize progress
XX348 proximal 6.3 FALSE
XX234 distal 8.0 TRUE
XX987 proximal 10.0 FALSE
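For illustration, the example table above could be built directly with data.frame(); the values are simply copied from the slide.
# Recreate the example data frame from the slide
a <- data.frame(
  localisation = c("proximal", "distal", "proximal"),
  tumorsize    = c(6.3, 8.0, 10.0),
  progress     = c(FALSE, TRUE, FALSE),
  row.names    = c("XX348", "XX234", "XX987")
)
a        # print the table
str(a)   # one type per column: text, numeric, logical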
18. Factors
A character string can contain arbitrary text. Sometimes it is useful to use a limited
vocabulary, with a small number of allowed words. A factor is a variable that can only
take such a limited number of values, which are called levels.
> a
[1] Kolon(Rektum) Magen Magen
[4] Magen Magen Retroperitoneal
[7] Magen Magen(retrogastral) Magen
Levels: Kolon(Rektum) Magen Magen(retrogastral) Retroperitoneal
> class(a)
[1] "factor"
> as.character(a)
[1] "Kolon(Rektum)" "Magen" "Magen"
[4] "Magen" "Magen" "Retroperitoneal"
[7] "Magen" "Magen(retrogastral)" "Magen"
> as.integer(a)
[1] 1 2 2 2 2 4 2 3 2
> as.integer(as.character(a))
[1] NA NA NA NA NA NA NA NA NA NA NA NA
Warning message: NAs introduced by coercion
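A factor like the one above is created from a character vector with factor(); the sketch below uses a few made-up labels similar to the slide's.
# Build a factor from a character vector (labels invented for illustration)
x <- c("Magen", "Kolon(Rektum)", "Magen", "Retroperitoneal", "Magen")
a <- factor(x)
levels(a)        # the allowed values (the levels)
as.integer(a)    # internal integer codes, ordered by level
table(a)         # counts per level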
19. Subsetting
Individual elements of a vector, matrix, array or data frame are
accessed with “[ ]” by specifying their index, or their name
> a
localisation tumorsize progress
XX348 proximal 6.3 0
XX234 distal 8.0 1
XX987 proximal 10.0 0
> a[3, 2]
[1] 10
> a["XX987", "tumorsize"]
[1] 10
> a["XX987",]
localisation tumorsize progress
XX987 proximal 10 0
20. Subsetting
> a
localisation tumorsize progress
XX348 proximal 6.3 0
XX234 distal 8.0 1
XX987 proximal 10.0 0
> a[c(1,3),]
localisation tumorsize progress
XX348 proximal 6.3 0
XX987 proximal 10.0 0
> a[c(T,F,T),]
localisation tumorsize progress
XX348 proximal 6.3 0
XX987 proximal 10.0 0
> a$localisation
[1] "proximal" "distal" "proximal"
> a$localisation=="proximal"
[1] TRUE FALSE TRUE
> a[ a$localisation=="proximal", ]
localisation tumorsize progress
XX348 proximal 6.3 0
XX987 proximal 10.0 0
The examples above show, in turn: subsetting rows by a vector of indices, subsetting rows by a logical vector, selecting a single column, a comparison that yields a logical vector, and using that logical vector to subset the selected rows.
21. Resources
• A package specification allows the production of loadable modules for specific purposes, and several contributed packages are made available through the CRAN sites.
• CRAN and R homepage:
– http://www.r-project.org/
It is R’s central homepage, giving information on the R project and everything related to it.
– http://cran.r-project.org/
It acts as the download area, carrying the software itself, extension packages, and PDF manuals.
• Getting help with functions and features
– help(solve)
– ?solve
– For a feature specified by special characters, the argument must be enclosed in double or single quotes, making it a “character string”: help("[[")
22. Getting help
Details about a specific command whose name you know (input arguments, options, algorithm, results):
> ?t.test
or
> help(t.test)
23. Getting help
o HTML search engine
o Search for topics with regular expressions: “help.search”
24. Probability distributions
• Cumulative distribution function P(X ≤ x): ‘p’ for the CDF
• Probability density function: ‘d’ for the density
• Quantile function (given q, the smallest x such that P(X ≤ x) > q): ‘q’ for the quantile
• Simulate from the distribution: ‘r’ for random generation
Distribution        R name    additional arguments
beta                beta      shape1, shape2, ncp
binomial            binom     size, prob
Cauchy              cauchy    location, scale
chi-squared         chisq     df, ncp
exponential         exp       rate
F                   f         df1, df2, ncp
gamma               gamma     shape, scale
geometric           geom      prob
hypergeometric      hyper     m, n, k
log-normal          lnorm     meanlog, sdlog
logistic            logis
negative binomial   nbinom
normal              norm
Poisson             pois
Student’s t         t
uniform             unif
Weibull             weibull
Wilcoxon            wilcox
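As a brief sketch of this naming convention (values chosen only for illustration), the four prefixes combine with the R name norm for the normal distribution:
# d/p/q/r naming convention, illustrated with the normal distribution
dnorm(0)                     # density at x = 0
pnorm(1.96)                  # CDF: P(X <= 1.96), about 0.975
qnorm(0.975)                 # quantile: the x with P(X <= x) = 0.975, about 1.96
rnorm(5, mean = 0, sd = 1)   # five simulated values
# The same pattern works for the other rows of the table,
# e.g. dbinom / pbinom / qbinom / rbinom with arguments size and prob.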
25. Grouping, loops and conditional execution
• Grouped expressions
– R is an expression language in the sense that its only command type is a function or expression which returns a result.
– Commands may be grouped together in braces, {expr 1, . . ., expr m}, in which case the value of the group is the result of the last expression in the group evaluated.
• Control statements
– if statements
– The language has available a conditional construction of the form
if (expr 1) expr 2 else expr 3
where expr 1 must evaluate to a logical value and the result of the entire expression is then evident.
– a vectorized version of the if/else construct, the ifelse function. This has the form ifelse(condition, a, b)
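A small sketch of both constructs; the values are arbitrary:
# if / else operates on a single logical value
x <- 7
if (x > 5) {
  msg <- "big"
} else {
  msg <- "small"
}
msg                              # "big"
# ifelse() is the vectorized counterpart: one result per element
y <- c(2, 6, 4, 9)
ifelse(y > 5, "big", "small")    # "small" "big" "small" "big"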
26. Repetitive execution
• for loops, repeat and while
– for (name in expr 1) expr 2
where name is the loop variable, expr 1 is a vector expression (often a sequence like 1:20), and expr 2 is often a grouped expression with its sub-expressions written in terms of the dummy name. expr 2 is repeatedly evaluated as name ranges through the values in the vector result of expr 1.
• Other looping facilities include the
– repeat expr statement and the
– while (condition) expr statement.
– The break statement can be used to terminate any loop, possibly abnormally. This is the only way to terminate repeat loops.
– The next statement can be used to discontinue one particular cycle and skip to the “next”.
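The following sketch (numbers arbitrary) shows repeat with break, and next inside a for loop:
# repeat runs until an explicit break
i <- 1
repeat {
  i <- i * 2
  if (i > 100) break   # the only way to leave a repeat loop
}
i                      # 128
# next skips the rest of the current iteration
for (k in 1:5) {
  if (k %% 2 == 0) next   # skip even values
  print(k)                # prints 1, 3, 5
}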
28. Loops
• When the same or similar tasks need to be performed multiple
times; for all elements of a list; for all columns of an array; etc.
• Monte Carlo Simulation
• Cross-Validation (delete one and etc)
for(i in 1:10) {
print(i*i)
}
i=1
while(i<=10) {
print(i*i)
i=i+sqrt(i)
}
29. lapply, sapply, apply
• When the same or similar tasks need to be performed multiple times for all elements of a list or for all columns of an array.
• May be easier and faster than “for” loops
• lapply(li, fct)
• To each element of the list li, the function fct is applied.
• The result is a list whose elements are the individual function results.
> li = list("klaus","martin","georg")
> lapply(li, toupper)
[[1]]
[1] "KLAUS"
[[2]]
[1] "MARTIN"
[[3]]
[1] "GEORG"
30. lapply, sapply, apply
sapply(li, fct)
Like lapply, but tries to simplify the result by converting it into a vector or array of appropriate size
> li = list("klaus","martin","georg")
> sapply(li, toupper)
[1] "KLAUS" "MARTIN" "GEORG"
> fct = function(x) { return(c(x, x*x, x*x*x)) }
> sapply(1:5, fct)
[,1] [,2] [,3] [,4] [,5]
[1,] 1 2 3 4 5
[2,] 1 4 9 16 25
[3,] 1 8 27 64 125
31. apply
apply( arr, margin, fct )
Apply the function fct along some dimensions of the array arr,
according to margin, and return a vector or array of the
appropriate size.
> x
[,1] [,2] [,3]
[1,] 5 7 0
[2,] 7 9 8
[3,] 4 6 7
[4,] 6 3 5
> apply(x, 1, sum)
[1] 12 24 17 14
> apply(x, 2, sum)
[1] 22 25 20
32. functions and operators
Functions do things with data
“Input”: function arguments (0,1,2,…)
“Output”: function result (exactly one)
Example:
add = function(a,b)
{ result = a+b
return(result) }
Operators:
Short-cut writing for frequently used functions of one or two
arguments.
Examples: + - * / ! & | %%
33. functions and operators
• Functions do things with data
• “Input”: function arguments (0,1,2,…)
• “Output”: function result (exactly one)
Exceptions to the rule:
• Functions may also use data that sits around in other places, not
just in their argument list: “scoping rules”*
• Functions may also do other things than returning a result. E.g.,
plot something on the screen: “side effects”
* Lexical scope and Statistical Computing.
R. Gentleman, R. Ihaka, Journal of Computational and
Graphical Statistics, 9(3), p. 491-508 (2000).
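A tiny sketch of both exceptions (names invented for illustration): a function that reads a variable from the environment where it was defined, and a function called mainly for its side effect:
# Lexical scoping: 'rate' is not an argument; it is found in the
# environment where 'grow' was defined
rate <- 1.05
grow <- function(x) x * rate
grow(100)            # 105
# Side effect: the returned value is not the point, the plot is
show_hist <- function(v) {
  hist(v, main = "histogram drawn as a side effect")
  invisible(NULL)    # return nothing useful
}
show_hist(rnorm(100))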
34. Reading data from files
• The read.table() function
– To read an entire data frame directly, the external file will normally have a
special form.
– The first line of the file should have a name for each variable in the data
frame.
– Each additional line of the file has its first item a row label and the values for
each variable.
Price Floor Area Rooms Age Cent.heat
01 52.00 111.0 830 5 6.2 no
02 54.75 128.0 710 5 7.5 no
03 57.50 101.0 1000 5 4.2 no
04 57.50 131.0 690 6 8.8 no
05 59.75 93.0 900 5 1.9 yes
...
• numeric variables and nonnumeric variables (factors)
35. Reading data from files
• HousePrice <- read.table("houses.data", header=TRUE)
Price Floor Area Rooms Age Cent.heat
52.00 111.0 830 5 6.2 no
54.75 128.0 710 5 7.5 no
57.50 101.0 1000 5 4.2 no
57.50 131.0 690 6 8.8 no
59.75 93.0 900 5 1.9 yes
...
• The data file is named ‘input.dat’.
– Suppose the data vectors are of equal length and are to be read in in parallel.
– Suppose that there are three vectors, the first of mode character and the remaining
two of mode numeric.
• The scan() function
– inp<- scan("input.dat", list("",0,0))
– To separate the data items into three separate vectors, use assignments like
label <- inp[[1]]; x <- inp[[2]]; y <- inp[[3]]
– inp <- scan("input.dat", list(id="", x=0, y=0)); inp$id; inp$x; inp$y
36. Storing data
• Every R object can be stored into and restored from a file with the commands “save” and “load”.
• This uses the XDR (external data representation) standard of Sun Microsystems and others, and is portable between MS-Windows, Unix, Mac.
> save(x, file="x.Rdata")
> load("x.Rdata")
37. Importing and exporting data
There are many ways to get data into R and out of R.
Most programs (e.g. Excel), as well as humans, know how to deal with rectangular tables in the form of tab-delimited text files.
> x = read.delim("filename.txt")
also: read.table, read.csv
> write.table(x, file="x.txt", sep="\t")
38. Importing data: caveats
Type conversions: by default, the read functions try to guess and autoconvert the data types of the different columns (e.g. number, factor, character).
There are options as.is and colClasses to control this – read the online help.
Special characters: the delimiter character (space, comma, tabulator) and the end-of-line character cannot be part of a data field.
To circumvent this, text may be “quoted”. However, if this option is used (the default), then the quote characters themselves cannot be part of a data field, except if they themselves are within quotes…
Understand the conventions your input files use and set the quote options accordingly.
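A hedged sketch of these options; the file name samples.txt and its column layout are invented for illustration:
# Hypothetical file 'samples.txt' with three tab-separated columns:
# id (text), dose (number), group (categorical)
x <- read.table("samples.txt",
                header     = TRUE,
                sep        = "\t",
                quote      = "\"",            # only double quotes delimit text
                colClasses = c("character",   # keep ids as text, no autoconversion
                               "numeric",
                               "factor"))
str(x)   # check that each column got the intended type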
39. Statistical models in R
• Regression analysis
– a linear regression model with independent homoscedastic errors
• The analysis of variance (ANOVA)
– Predictors are now all categorical / qualitative.
– The name Analysis of Variance is used because the original thinking was to try to partition the overall variance in the response to that due to each of the factors and the error.
– Predictors are now typically called factors, which have some number of levels.
– The parameters are now often called effects.
– The parameters are considered fixed but unknown — called fixed-effects models — but random-effects models are also used, where parameters are taken to be random variables.
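As a minimal sketch (simulated data, not from the slides), such a linear regression model is fit with lm():
# Simulate a simple linear model y = 2 + 3*x + noise, then fit it
set.seed(1)
x <- runif(50)
y <- 2 + 3 * x + rnorm(50, sd = 0.5)
fit <- lm(y ~ x)
summary(fit)   # coefficients, standard errors, R-squared
coef(fit)      # estimates close to 2 and 3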
40. One-Way ANOVA
• The model
– Given a factor occurring at i = 1,…,I levels, with j = 1,…,J_i observations per level, we use the model
– y_ij = μ + α_i + ε_ij,  i = 1,…,I ,  j = 1,…,J_i
• Not all the parameters are identifiable and some restriction is necessary:
– Set μ = 0 and use I different dummy variables.
– Set α_1 = 0 — this corresponds to treatment contrasts
– Set Σ_i J_i α_i = 0 — ensures orthogonality
• Generalized linear models
• Nonlinear regression
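A short sketch of fitting a one-way ANOVA in R with the default treatment contrasts; the data are simulated for illustration:
# One-way ANOVA with a 3-level factor; alpha_1 = 0 (treatment contrasts, R's default)
set.seed(2)
grp <- factor(rep(c("A", "B", "C"), each = 10))
y   <- rnorm(30, mean = c(5, 6, 8)[as.integer(grp)])
fit <- aov(y ~ grp)
summary(fit)         # the ANOVA table
coef(lm(y ~ grp))    # (Intercept) = level A mean; grpB, grpC are differences from A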
41. Two-Way Anova
• The model: y_ijk = μ + α_i + β_j + (αβ)_ij + ε_ijk.
– We have two factors, α at I levels and β at J levels.
– Let n_ij be the number of observations at level i of α and level j of β, and let those observations be y_ij1, y_ij2,…. A complete layout has n_ij ≥ 1 for all i, j.
• The interaction effect (αβ)_ij is interpreted as that part of the mean response not attributable to the additive effect of α_i and β_j.
– For example, you may enjoy strawberries and cream individually, but the combination is superior.
– In contrast, you may like fish and ice cream but not together.
• As part of an investigation of toxic agents, 48 rats were allocated to 3 poisons (I, II, III) and 4 treatments (A, B, C, D).
– The response was survival time in tens of hours. The Data:
42. Statistical Strategy and Model Uncertainty
• Strategy
– Diagnostics: checking of assumptions: constant variance, linearity, normality, outliers, influential points, serial correlation and collinearity.
– Transformation: transforming the response — Box-Cox; transforming the predictors — tests and polynomial regression.
– Variable selection: stepwise and criterion-based methods
• Avoid doing too much analysis.
– Remember that fitting the data well is no guarantee of good predictive performance or that the model is a good representation of the underlying population.
– Avoid complex models for small datasets.
– Try to obtain new data to validate your proposed model. Some people set aside some of their existing data for this purpose.
– Use past experience with similar data to guide the choice of model.
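As an illustrative sketch of these steps in R, using the built-in stackloss data (which also appears later in these slides); the particular functions shown (plot on an lm fit, MASS::boxcox, step) are common choices, not ones prescribed by the slide:
# Diagnostics: residual and Q-Q plots for a fitted lm object
fit <- lm(stack.loss ~ Air.Flow + Water.Temp + Acid.Conc., data = stackloss)
par(mfrow = c(2, 2))
plot(fit)        # constant variance, normality, influential points
# Transformation: Box-Cox plot for the response (MASS is a standard package)
library(MASS)
boxcox(fit)
# Variable selection: stepwise search by AIC
step(fit)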
43. Simulation and Regression
• What is the sampling distribution of least squares estimates when the noises are not normally distributed?
• Assume the noises ε are independent and identically distributed.
1. Generate ε from the known error distribution.
2. Form y = Xβ + ε.
3. Compute the estimate of β.
• Repeat these three steps many times.
– We can estimate the sampling distribution of the estimator using the empirical distribution of the generated estimates, which we can estimate as accurately as we please by simply running the simulation for long enough.
– This technique is useful for a theoretical investigation of the properties of a proposed new estimator. We can see how its performance compares to other estimators.
– It is of no value for the actual data since we don’t know the true error distribution and we don’t know β.
44. Bootstrap
• The bootstrap method mirrors the simulation method but uses quantities we do know.
– Instead of sampling from the population distribution, which we do not know in practice, we resample from the data itself.
• Difficulty: β is unknown and the distribution of ε is unknown.
• Solution: β is replaced by its good estimate b and the distribution of ε is replaced by the residuals e1,…,en.
1. Generate e* by sampling with replacement from e1,…,en.
2. Form y* = Xb + e*.
3. Compute b* from (X, y*).
• For small n, it is possible to compute b* for every possible sample of e1,…,en.
– In practice, this number of bootstrap samples can be as small as 50 if all we want is an estimate of the variance of our estimates, but needs to be larger if confidence intervals are wanted.
45. Implementation
• How do we take a sample of residuals with replacement?
– sample() is good for generating random samples of indices:
– sample(10, rep=T) leads to “7 9 9 2 5 7 4 1 8 9”
• Execute the bootstrap.
– Make a matrix to save the results in and then repeat the bootstrap process 1000 times for a linear regression with five regressors:
bcoef <- matrix(0,1000,6)
– Program:
for(i in 1:1000){
  newy <- g$fit + g$res[sample(47, rep=T)]
  brg <- lm(newy~y)
  bcoef[i,] <- brg$coef
}
– Here g is the output of the regression analysis on the original data.
46. Test and Confidence Interval
• To test the null hypothesis H0: β1 = 0 against the alternative H1: β1 > 0, we may figure out what fraction of the bootstrap-sampled values of β1 were less than zero:
– length(bcoef[bcoef[,2]<0,2])/1000: it leads to 0.019.
– The p-value is 1.9% and we reject the null at the 5% level.
• We can also make a 95% confidence interval for this parameter by taking the empirical quantiles:
– quantile(bcoef[,2],c(0.025,0.975))
2.5% 97.5%
0.00099037 0.01292449
• We can get a better picture of the distribution by looking at the density and marking the confidence interval:
– plot(density(bcoef[,2]),xlab="Coefficient of Race",main="")
– abline(v=quantile(bcoef[,2],c(0.025,0.975)))
52. Old Faithful Geyser in Yellowstone National Park
• Research goals:
– help visitors plan their trips around eruption times
– understand how the geyser forms, in order to protect the environment
• Data:
– collected from August 1, 1985 to August 15, 1985
– waiting: time interval between the starts of successive eruptions, denoted by w_t
– duration: the duration of the subsequent eruption, denoted by d_t
– Some durations were recorded only as L(ong), S(hort) and M(edium) during the night
w1 d1 w2 d2
– predict w_{t+1} from d_t (regression analysis)
– In R, use help(faithful) to get more information on this data set.
– Load the data set with data(faithful).
• geyser <- matrix(scan("c:/geyser.txt"), byrow=F, ncol=2)
geyser.waiting <- geyser[,1]; geyser.duration <- geyser[,2]
hist(geyser.waiting)
53. Kernel Density Estimation
• The function ‘density’ computes kernel density estimates with the given kernel and bandwidth.
– density(x, bw = "nrd0", adjust = 1, kernel = c("gaussian", "epanechnikov", "rectangular", "triangular", "biweight", "cosine", "optcosine"), window = kernel, width, give.Rkern = FALSE, n = 512, from, to, cut = 3, na.rm = FALSE)
– n: the number of equally spaced points at which the density is to be estimated.
• hist(geyser.waiting, freq=FALSE)
lines(density(geyser.waiting))
plot(density(geyser.waiting))
lines(density(geyser.waiting, bw=10))
lines(density(geyser.waiting, bw=1, kernel="e"))
• Show the kernels in the R parametrization:
(kernels <- eval(formals(density)$kernel))
plot(density(0, bw = 1), xlab = "", main = "R's density() kernels with bw = 1")
for(i in 2:length(kernels)) lines(density(0, bw = 1, kern = kernels[i]), col = i)
legend(1.5, .4, legend = kernels, col = seq(kernels), lty = 1, cex = .8, y.int = 1)
54. The Effect of Choice of Kernels
• The average amount of annual precipitation (rainfall) in inches for each of 70 United States (and Puerto Rico) cities.
• data(precip)
• bw <- bw.SJ(precip) ## sensible automatic choice
• plot(density(precip, bw = bw, n = 2^13), main = "same sd bandwidths, 7 different kernels")
• for(i in 2:length(kernels)) lines(density(precip, bw = bw, kern = kernels[i], n = 2^13), col = i)
57. Explore Association
• data(stackloss)
– It is a data frame with 21 observations on 4 variables.
– [,1] ‘Air Flow’ Flow of cooling air
– [,2] ‘Water Temp’ Cooling Water Inlet Temperature
– [,3] ‘Acid Conc.’ Concentration of acid [per 1000, minus 500]
– [,4] ‘stack.loss’ Stack loss
– The data sets ‘stack.x’, a matrix with the first three (independent) variables of the data frame, and ‘stack.loss’, the numeric vector giving the fourth (dependent) variable, are provided as well.
• Scatterplots, scatterplot matrix:
– plot(stackloss$Ai, stackloss$W)
– plot(stackloss)
– two quantitative variables.
• summary(lm.stack <- lm(stack.loss ~ stack.x))
58. Explore Association
• A boxplot is suitable for showing a quantitative and a qualitative variable.
• The variable test is not quantitative but categorical.
– Such variables are also called factors.
59. LEAST SQUARES ESTIMATION
• Geometric representation of the estimation.
– The data vector Y is projected orthogonally onto the model space spanned by X.
– The fit ŷ = Xβ̂ is represented by the projection, with the difference between the fit and the data represented by the residual vector e.
[Figure: the data vector projected onto the model space spanned by X, showing the fitted value ŷ = Xβ̂ and the residual e]
60. Hypothesis tests to compare models
• Given several predictors for a response, we might wonder whether all are needed.
– Consider a large model and a smaller model, which consists of a subset of the predictors that are in the large model.
– By the principle of Occam’s Razor (also known as the law of parsimony), we’d prefer to use the smaller model if the data will support it.
– So we’ll take the smaller model to represent the null hypothesis and the larger model to represent the alternative.
– A geometric view of the problem may be seen in the following figure.