The **cloudml** package provides an R interface to [Google Cloud Machine Learning Engine](https://cloud.google.com/vertex-ai), a managed service that enables:
* Scalable training of models built with the [keras](https://keras3.posit.co/), [tfestimators](https://github.com/rstudio/tfestimators), and [tensorflow](https://tensorflow.rstudio.com/) R packages.
* On-demand access to training on GPUs, including the new [Tesla P100 GPUs](https://www.nvidia.com/en-us/data-center/) from NVIDIA®.
* Hyperparameter tuning to optimize key attributes of model architectures in order to maximize predictive accuracy.
* Deployment of trained models to the Google global prediction platform that can support thousands of users and TBs of data.
CloudML is a managed service where you pay only for the hardware resources that you use. Prices vary depending on configuration (e.g. CPU vs. GPU vs. multiple GPUs). See <https://cloud.google.com/vertex-ai/pricing> for additional details.
For documentation on using the R interface to CloudML see the package website at <https://github.com/rstudio/cloudml>
## vignettes/deployment.Rmd

Cloud ML Engine can host your models so that you can get predictions from them in the cloud.
### Exporting a SavedModel
The Cloud ML prediction service makes use of models exported through the
`export_savedmodel()` function which is available for models created using the [tensorflow](https://tensorflow.rstudio.com/), [keras](https://keras3.posit.co/) and
[tfestimators](https://github.com/rstudio/tfestimators) packages or any other tool that supports the [tf.train.Saver](https://www.tensorflow.org/api_docs/python/tf/compat/v1/train/Saver) interface.
For instance, we can use `examples/keras/train.R` included in this package to define a model.
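As a sketch of what such a script's export step might look like (the model definition below is a hypothetical stand-in, not the actual contents of `examples/keras/train.R`):

```r
library(keras)

# Hypothetical stand-in model (not the one defined in examples/keras/train.R)
model <- keras_model_sequential() %>%
  layer_dense(units = 32, activation = "relu", input_shape = c(784)) %>%
  layer_dense(units = 10, activation = "softmax")

model %>% compile(
  optimizer = "rmsprop",
  loss = "categorical_crossentropy",
  metrics = "accuracy"
)

# ... fit the model on training data ...

# Write a SavedModel directory ("savedmodel/") that the Cloud ML
# prediction service can host
export_savedmodel(model, "savedmodel")
```

The exported directory can then be uploaded to Google Storage and deployed as a prediction service.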
Start by installing the cloudml R package from CRAN as follows:

```{r}
install.packages("cloudml")
```
Then, install the *Google Cloud SDK*, a set of utilities that enable you to interact with your Google Cloud account from within R. You can install the SDK using the `gcloud_install()` function.
```{r}
library(cloudml)
gcloud_install()
```

To train a model, submit your training script with `cloudml_train()`:

```{r}
cloudml_train("train.R")
```
All of the files within the current working directory will be bundled up and sent along with the script to CloudML.
<div class="bs-callout bs-callout-warning">
Note that the very first time you submit a job to CloudML the various packages required to run your script will be compiled from source. This will make the execution time of the job considerably longer than you might expect. It's only the first job that incurs this overhead though (since the package installations are cached), and subsequent jobs will run more quickly.
</div>
If you are using [RStudio v1.1](https://posit.co/download/rstudio-desktop/) or higher, then the CloudML training job is monitored (and its results collected) using a background terminal:
You can list all previous runs as a data frame using the `ls_runs()` function:
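For instance, a sketch of inspecting runs (the metric column names shown here follow tfruns conventions and will vary with the metrics your model actually records):

```r
library(cloudml)

# Returns a data frame with one row per training run, most recent first
runs <- ls_runs()

# Inspect a few fields; metric_* columns depend on your model's metrics
runs[, c("run_dir", "metric_loss", "metric_val_loss")]
```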
There are many tools available to list, filter, and compare training runs.
## Training with a GPU
By default, CloudML utilizes "standard" CPU-based instances suitable for training simple models with small to moderate datasets. You can request the use of other machine types, including ones with GPUs, using the `master_type` parameter of `cloudml_train()`.
For example, the following would train the same model as above but with a [Tesla K80 GPU](http://www.nvidia.com/object/tesla-k80.html):
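A sketch of such a call, assuming `"standard_gpu"` is the CloudML machine type that provides a single K80-equipped worker:

```r
library(cloudml)

# Request a GPU-equipped worker via the master_type parameter
cloudml_train("train.R", master_type = "standard_gpu")
```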
To learn more about using CloudML with R, see the following articles:
* [Google Cloud Storage](storage.html) provides information on copying data between your local machine and Google Storage and also describes how to use data within Google Storage during training.
* [Deploying Models](deployment.html) describes how to deploy trained models and generate predictions from them.
## vignettes/storage.Rmd

Note that to use these functions you need to import the cloudml package with `library(cloudml)`.
There are two distinct ways to read data from Google Storage. Which you use will depend on whether the TensorFlow API you are using supports direct references to `gs://` bucket URLs.
If you are using the [TensorFlow Datasets](https://tensorflow.rstudio.com/guides/tfdatasets/) API, then you can use `gs://` bucket URLs directly. In this case you'll want to use the `gs://` URL when running on CloudML, and a synchronized copy of the bucket when running locally. You can use the `gs_data_dir()` function to accomplish this. For example:
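A sketch of this pattern (`gs://my-bucket/data` and `train.csv` are illustrative names, not real resources):

```r
library(cloudml)

# Resolves to the gs:// URL when running on CloudML, and to a
# locally synchronized copy of the bucket when running locally
data_dir <- gs_data_dir("gs://my-bucket/data")

# Build paths relative to the resolved data directory as usual
filename <- file.path(data_dir, "train.csv")
```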