MNIST Neural Network for Beginners - TensorFlow 2 quickstart for beginners
The best place to start is with the user-friendly Sequential API. You can create models by plugging together building blocks. Run the "Hello World" example below, then visit the tutorials to learn more.
To learn about ML, visit our education page. Begin with curated curriculums to improve your skills in foundational ML areas.
https://www.tensorflow.org/overview
import tensorflow as tf

# Load the MNIST dataset and scale pixel values from 0-255 integers to [0, 1] floats
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Stack layers into a simple feed-forward classifier
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Configure training, then fit and evaluate
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
TensorFlow 2 quickstart for beginners
This short introduction uses Keras to:
- Build a neural network that classifies images.
- Train this neural network.
- Evaluate the accuracy of the model.
This is a Google Colaboratory notebook file. Python programs are run directly in the browser—a great way to learn and use TensorFlow. To follow this tutorial, run the notebook in Google Colab by clicking the button at the top of this page.
- In Colab, connect to a Python runtime: At the top-right of the menu bar, select CONNECT.
- Run all the notebook code cells: Select Runtime > Run all.
Download and install TensorFlow 2. Import TensorFlow into your program:
Note: Upgrade pip to install the TensorFlow 2 package. See the install guide for details.
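The import cell itself is not reproduced in this post; following the official quickstart it would simply be:

import tensorflow as tf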
Load and prepare the MNIST dataset. Convert the samples from integers to floating-point numbers:
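The loading cell is missing here; based on the Hello-World block above and the official quickstart, it is presumably:

mnist = tf.keras.datasets.mnist

# Pixel values are 0-255 integers; scale them to floats in [0, 1]
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0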
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz 11493376/11490434 [==============================] - 0s 0us/step
Build the tf.keras.Sequential model by stacking layers. Choose an optimizer and loss function for training:
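The model-building cell is missing here. Note that the raw logits output below implies the last Dense layer has no softmax activation (unlike the Hello-World block above); the cell from the official quickstart is presumably:

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)  # no activation: the model returns logits
])

# Run the untrained model on one training example to get a vector of logits
predictions = model(x_train[:1]).numpy()
predictions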
array([[-0.10128427, -0.14768334, 0.61712044, 0.61311674, -0.25687024, -0.24292384, 0.23817754, -0.5163672 , -0.28512543, -0.35918558]], dtype=float32)
The tf.nn.softmax function converts these logits to "probabilities" for each class:
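The cell that produced the output below is presumably:

tf.nn.softmax(predictions).numpy()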
array([[0.08739848, 0.08343592, 0.17926814, 0.17855184, 0.07480554, 0.07585612, 0.12272422, 0.05770795, 0.07272148, 0.06753031]], dtype=float32)
Note: It is possible to bake this tf.nn.softmax in as the activation function for the last layer of the network. While this can make the model output more directly interpretable, this approach is discouraged as it's impossible to provide an exact and numerically stable loss calculation for all models when using a softmax output.
The losses.SparseCategoricalCrossentropy loss takes a vector of logits and a true-class index and returns a scalar loss for each example. This loss is equal to the negative log probability of the true class: it is zero if the model is sure of the correct class.
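A minimal sketch of the loss definition, as in the official quickstart (from_logits=True because the model returns raw logits):

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)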
This untrained model gives probabilities close to random (1/10 for each class), so the initial loss should be close to -tf.math.log(1/10) ~= 2.3.
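The value below comes from evaluating this loss on the untrained model's logits, presumably with a cell like:

loss_fn(y_train[:1], predictions).numpy()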
2.578917
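Before fitting, the model has to be compiled with an optimizer, the loss defined above, and a metric to monitor; the missing cell (mirroring the quickstart) would be:

model.compile(optimizer='adam',
              loss=loss_fn,
              metrics=['accuracy'])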
The Model.fit method adjusts the model parameters to minimize the loss:
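The training cell that produced the log below is presumably:

model.fit(x_train, y_train, epochs=5)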
Epoch 1/5 1875/1875 [==============================] - 3s 2ms/step - loss: 0.2970 - accuracy: 0.9146 Epoch 2/5 1875/1875 [==============================] - 3s 2ms/step - loss: 0.1473 - accuracy: 0.9563 Epoch 3/5 1875/1875 [==============================] - 3s 2ms/step - loss: 0.1096 - accuracy: 0.9671 Epoch 4/5 1875/1875 [==============================] - 3s 2ms/step - loss: 0.0897 - accuracy: 0.9719 Epoch 5/5 1875/1875 [==============================] - 3s 2ms/step - loss: 0.0752 - accuracy: 0.9760
<tensorflow.python.keras.callbacks.History at 0x7fd755cb47f0>
The Model.evaluate method checks the model's performance, usually on a validation set or test set.
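The evaluation cell is missing; given the terse single-line output format below, it presumably used verbose=2:

model.evaluate(x_test, y_test, verbose=2)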
313/313 - 0s - loss: 0.0770 - accuracy: 0.9775
[0.07695907354354858, 0.9775000214576721]
The image classifier is now trained to ~98% accuracy on this dataset. To learn more, read the TensorFlow tutorials.
If you want your model to return a probability, you can wrap the trained model and attach the softmax to it:
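The wrapping cell that produced the tensor below is presumably:

probability_model = tf.keras.Sequential([
    model,
    tf.keras.layers.Softmax()  # convert logits to per-class probabilities
])
probability_model(x_test[:5])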
<tf.Tensor: shape=(5, 10), dtype=float32, numpy= array([[2.1094127e-08, 1.3406939e-08, 2.6930111e-06, 1.0430716e-04, 1.1646294e-10, 7.5275358e-07, 4.9344961e-14, 9.9987352e-01, 4.5862681e-08, 1.8621193e-05], [6.6366977e-07, 6.6512353e-06, 9.9998164e-01, 1.4112009e-06, 8.1197561e-15, 4.9681449e-07, 1.4861804e-06, 4.9511540e-13, 7.6379356e-06, 4.1590975e-14], [1.5588105e-06, 9.9926704e-01, 8.1892227e-05, 2.6095991e-05, 1.2973814e-05, 5.7134803e-06, 2.4758489e-05, 4.2972015e-04, 1.4964781e-04, 6.4341845e-07], [9.9977213e-01, 1.5741728e-09, 7.8205092e-05, 1.2229310e-07, 9.9353588e-07, 2.4155740e-06, 1.3811085e-04, 1.5705807e-06, 2.3177182e-07, 6.2728159e-06], [1.7277871e-05, 1.0273905e-09, 4.3157845e-05, 3.0481058e-08, 9.9705887e-01, 6.6911116e-06, 8.1449107e-06, 4.3538297e-05, 2.4825749e-06, 2.8197672e-03]], dtype=float32)>