Machine Learning with R
In this tutorial, we will walk through the creation and training of a multi-layered neural network (also known as a deep feedforward network) in R using the keras package, which is an R interface to the popular Python deep learning library Keras.
Before we can use keras in R, we need working installations of both R and Python; it's recommended to have Python and TensorFlow already set up on your machine.
To install keras in R, you can use:
install.packages("keras")
Then, you can install TensorFlow via keras:
library(keras)
install_keras()
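If you want to confirm that R can actually reach the Python backend before going further, the keras package provides a small helper for that (this check is optional):

# Returns TRUE once the Python Keras/TensorFlow backend is available
is_keras_available()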
We will create a model for the classic MNIST dataset, a collection of 28x28 grayscale images of handwritten digits (0-9).
# Load MNIST and split into training and test sets
data <- dataset_mnist()
train_images <- data$train$x
train_labels <- data$train$y
test_images <- data$test$x
test_labels <- data$test$y
# Flatten each 28x28 image into a 784-element vector and scale pixel values to [0, 1]
train_images <- array_reshape(train_images, c(60000, 28 * 28))
train_images <- train_images / 255
test_images <- array_reshape(test_images, c(10000, 28 * 28))
test_images <- test_images / 255
# One-hot encode the digit labels
train_labels <- to_categorical(train_labels)
test_labels <- to_categorical(test_labels)
model <- keras_model_sequential() %>%
  layer_dense(units = 512, activation = 'relu', input_shape = c(28 * 28)) %>%
  layer_dense(units = 128, activation = 'relu') %>%
  layer_dense(units = 10, activation = 'softmax')

model %>% compile(
  optimizer = 'rmsprop',
  loss = 'categorical_crossentropy',
  metrics = c('accuracy')
)
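To double-check the architecture and the number of trainable parameters before training, you can print a summary of the compiled model:

summary(model)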
Here, we've set up a neural network with three dense layers: a hidden layer with 512 units and relu activation, a second hidden layer with 128 units and relu activation, and an output layer with 10 units and softmax activation, one unit per digit class.
Next, we train the model for 5 epochs, holding out 20% of the training data for validation:
history <- model %>% fit(
  train_images, train_labels,
  epochs = 5,
  batch_size = 128,
  validation_split = 0.2
)
After training, we evaluate the model on the held-out test set:
model %>% evaluate(test_images, test_labels)
You can visualize the training progress by plotting the metrics stored in the history object:
plot(history)
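Beyond the aggregate test metrics, you can also generate per-image predictions. A minimal sketch (the index range 1:5 is arbitrary and chosen only for illustration): predict() returns a matrix of class probabilities, and which.max picks the most likely digit, with 1 subtracted because the classes run from 0 to 9.

# Predict class probabilities for the first five test images
probs <- model %>% predict(test_images[1:5, , drop = FALSE])
# Convert each row of probabilities to a digit label (0-9)
predicted_digits <- apply(probs, 1, which.max) - 1
predicted_digits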
This is a basic introduction to creating multi-layered neural networks in R using keras. The keras package provides a flexible and powerful way to create deep learning models in R, leveraging the capabilities of the underlying TensorFlow library. Remember, the architecture and hyperparameters chosen in this example are just a starting point; in practice, these would need tuning for optimal performance.
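As one illustration of that kind of tuning, here is a sketch of a variant architecture that adds dropout for regularization; the layer sizes and dropout rate below are arbitrary starting values, not tuned settings.

model_v2 <- keras_model_sequential() %>%
  layer_dense(units = 512, activation = 'relu', input_shape = c(28 * 28)) %>%
  layer_dropout(rate = 0.3) %>%  # randomly zero 30% of activations during training
  layer_dense(units = 128, activation = 'relu') %>%
  layer_dropout(rate = 0.3) %>%
  layer_dense(units = 10, activation = 'softmax')

model_v2 %>% compile(
  optimizer = 'rmsprop',
  loss = 'categorical_crossentropy',
  metrics = c('accuracy')
)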
Building multi-layered neural networks in R:
Overview: The example below introduces the concept of multi-layered neural networks and the steps involved in building one in R with the neuralnet package, a lighter-weight, pure-R alternative to keras.
Code:
# Using the neuralnet package to build a multi-layered neural network
library(neuralnet)

# Sample data
data <- data.frame(
  input1 = c(0, 1, 0, 1),
  input2 = c(0, 0, 1, 1),
  output = c(0, 1, 1, 0)
)

# Building a neural network with one hidden layer
neural_network <- neuralnet(output ~ input1 + input2,
                            data = data,
                            hidden = c(3),
                            linear.output = FALSE)

# Printing the neural network
print(neural_network)
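The neuralnet package can also draw the fitted network, which is a quick way to see the hidden layer and the learned weights:

# Visualize the network structure and fitted weights
plot(neural_network)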
Training and testing multi-layered networks in R:
Overview: The example below covers the steps involved in splitting the data, training a multi-layered neural network, and testing it on held-out data in R.
Code:
# Training and testing a neural network in R
set.seed(123)
train_indices <- sample(1:nrow(data), 0.7 * nrow(data))

# Splitting data into training and testing sets
train_data <- data[train_indices, ]
test_data <- data[-train_indices, ]

# Building and training the neural network
neural_network <- neuralnet(output ~ input1 + input2,
                            data = train_data,
                            hidden = c(3),
                            linear.output = FALSE)

# Testing the neural network on the test set
predictions <- predict(neural_network, newdata = test_data)
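To see how the network does on the held-out rows, you can threshold the predicted probabilities and compare them with the true labels. A minimal sketch (the 0.5 cutoff is an assumption, and with only four XOR rows the test set is tiny, so the resulting accuracy is illustrative rather than meaningful):

# Convert predicted probabilities to 0/1 class labels
predicted_classes <- ifelse(predictions > 0.5, 1, 0)
# Proportion of test rows classified correctly
mean(predicted_classes == test_data$output)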