Tabs¶
https://sphinx-design.readthedocs.io/en/furo-theme/tabs.html
https://sphinx-design.readthedocs.io/en/latest/tabs.html#tabbed-code-examples
Introduction¶
Tabs organize and allow navigation between groups of content that are related and at the same level of hierarchy.
Each tab should contain content that is distinct from other tabs in a set.
See the Material Design description for further details.
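A minimal sketch of the basic syntax, assuming the ``sphinx_design`` extension is enabled in ``conf.py`` (labels and content here are illustrative placeholders):

```rst
.. tab-set::

    .. tab-item:: Label1

        Content 1

    .. tab-item:: Label2

        Content 2
```

Each ``tab-item`` becomes one selectable tab; only one item per set is visible at a time.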
Example from ray code (1)¶
.. tab-set-code::

    .. literalinclude:: src/ray.py
       :language: python

    .. literalinclude:: src/ray.java
       :language: java

    .. literalinclude:: src/ray.c++
       :language: c++
@ray.remote
class Counter(object):
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value

    def get_counter(self):
        return self.value

# Create an actor from this class.
counter = Counter.remote()
// A regular Java class.
public class Counter {
    private int value = 0;

    public int increment() {
        this.value += 1;
        return this.value;
    }
}

// Create an actor from this class.
// `Ray.actor` takes a factory method that can produce
// a `Counter` object. Here, we pass `Counter`'s constructor
// as the argument.
ActorHandle<Counter> counter = Ray.actor(Counter::new).remote();
// A regular C++ class.
class Counter {
private:
    int value = 0;

public:
    int Increment() {
        value += 1;
        return value;
    }
};

// Factory function of Counter class.
static Counter *CreateCounter() {
    return new Counter();
};

RAY_REMOTE(&Counter::Increment, CreateCounter);

// Create an actor from this class.
// `ray::Actor` takes a factory method that can produce
// a `Counter` object. Here, we pass `Counter`'s factory function
// as the argument.
auto counter = ray::Actor(CreateCounter).Remote();
Example from ray code (2)¶
.. tab-set::

    .. tab-item:: NumPy (default)

        The ``"numpy"`` option presents batches as ``Dict[str, np.ndarray]``, where the
        `numpy.ndarray <https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html>`__
        values represent a batch of record field values.

        .. literalinclude:: ./doc_code/transforming_data.py
           :language: python
           :start-after: __writing_numpy_udfs_begin__
           :end-before: __writing_numpy_udfs_end__

    .. tab-item:: Pandas

        The ``"pandas"`` batch format presents batches in
        `pandas.DataFrame <https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html>`__
        format.

        .. literalinclude:: ./doc_code/transforming_data.py
           :language: python
           :start-after: __writing_pandas_udfs_begin__
           :end-before: __writing_pandas_udfs_end__

    .. tab-item:: PyArrow

        The ``"pyarrow"`` batch format presents batches in
        `pyarrow.Table <https://arrow.apache.org/docs/python/generated/pyarrow.Table.html>`__
        format.

        .. literalinclude:: ./doc_code/transforming_data.py
           :language: python
           :start-after: __writing_arrow_udfs_begin__
           :end-before: __writing_arrow_udfs_end__
The ``"numpy"`` option presents batches as ``Dict[str, np.ndarray]``, where the ``numpy.ndarray`` values represent a batch of record field values.
import ray
import numpy as np
from typing import Dict

ds = ray.data.read_csv("example://iris.csv")

def numpy_transform(batch: Dict[str, np.ndarray]) -> Dict[str, np.ndarray]:
    new_col = batch["sepal.length"] / np.max(batch["sepal.length"])
    batch["normalized.sepal.length"] = new_col
    del batch["sepal.length"]
    return batch

ds.map_batches(numpy_transform, batch_format="numpy").show(2)
# -> {'sepal.width': 3.2, 'petal.length': 4.7, 'petal.width': 1.4,
#     'variety': 'Versicolor', 'normalized.sepal.length': 1.0}
# -> {'sepal.width': 3.2, 'petal.length': 4.5, 'petal.width': 1.5,
#     'variety': 'Versicolor', 'normalized.sepal.length': 0.9142857142857144}
The ``"pandas"`` batch format presents batches in ``pandas.DataFrame`` format.
import ray
import pandas as pd

ds = ray.data.read_csv("example://iris.csv")

def pandas_transform(df: pd.DataFrame) -> pd.DataFrame:
    df.loc[:, "normalized.sepal.length"] = df["sepal.length"] / df["sepal.length"].max()
    df = df.drop(columns=["sepal.length"])
    return df

ds.map_batches(pandas_transform, batch_format="pandas").show(2)
# -> {'sepal.width': 3.2, 'petal.length': 4.7, 'petal.width': 1.4,
#     'variety': 'Versicolor', 'normalized.sepal.length': 1.0}
# -> {'sepal.width': 3.2, 'petal.length': 4.5, 'petal.width': 1.5,
#     'variety': 'Versicolor', 'normalized.sepal.length': 0.9142857142857144}
The ``"pyarrow"`` batch format presents batches in ``pyarrow.Table`` format.
import ray
import pyarrow as pa
import pyarrow.compute as pac

ds = ray.data.read_csv("example://iris.csv")

def pyarrow_transform(batch: pa.Table) -> pa.Table:
    batch = batch.append_column(
        "normalized.sepal.length",
        pac.divide(batch["sepal.length"], pac.max(batch["sepal.length"])),
    )
    return batch.drop(["sepal.length"])

ds.map_batches(pyarrow_transform, batch_format="pyarrow").show(2)
# -> {'sepal.width': 3.2, 'petal.length': 4.7, 'petal.width': 1.4,
#     'variety': 'Versicolor', 'normalized.sepal.length': 1.0}
# -> {'sepal.width': 3.2, 'petal.length': 4.5, 'petal.width': 1.5,
#     'variety': 'Versicolor', 'normalized.sepal.length': 0.9142857142857144}
Synchronised Tabs¶
Use the sync option to synchronise the selected tab items across multiple tab-sets.
Note, synchronisation requires that JavaScript is enabled.
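A sketch of two synchronised tab-sets, following the sphinx-design documentation (labels, keys, and content are illustrative): items sharing the same ``:sync:`` key switch together, so selecting ``Label2`` in one set also selects it in the other.

```rst
.. tab-set::

    .. tab-item:: Label1
        :sync: key1

        Content 1

    .. tab-item:: Label2
        :sync: key2

        Content 2

.. tab-set::

    .. tab-item:: Label1
        :sync: key1

        Content 1

    .. tab-item:: Label2
        :sync: key2

        Content 2
```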
Example from ray code ray-cluster-configuration¶
auth.ssh_public_key¶
AWS: Not available.
Azure: The path to an existing public key for Ray to use.
    Required: Yes
    Importance: High
    Type: String
GCP: Not available.
provider.type¶
AWS: The cloud service provider. For AWS, this must be set to aws.
    Required: Yes
    Importance: High
    Type: String
Azure: The cloud service provider. For Azure, this must be set to azure.
    Required: Yes
    Importance: High
    Type: String
GCP: The cloud service provider. For GCP, this must be set to gcp.
    Required: Yes
    Importance: High
    Type: String
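Per-provider reference entries like the ones above are naturally authored as synchronised tab-sets, so the reader's cloud choice carries across the whole page. A sketch of how the ``provider.type`` entry could be written (directive layout and sync keys are illustrative, not copied from the Ray source):

```rst
.. tab-set::

    .. tab-item:: AWS
        :sync: aws

        The cloud service provider. For AWS, this must be set to ``aws``.

    .. tab-item:: Azure
        :sync: azure

        The cloud service provider. For Azure, this must be set to ``azure``.

    .. tab-item:: GCP
        :sync: gcp

        The cloud service provider. For GCP, this must be set to ``gcp``.
```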
Minimal configuration¶
AWS:

# A unique identifier for the head node and workers of this cluster.
cluster_name: aws-example-minimal

# Cloud-provider specific configuration.
provider:
    type: aws
    region: us-west-2

Azure:

# A unique identifier for the head node and workers of this cluster.
cluster_name: minimal

# The maximum number of worker nodes to launch in addition to the head
# node. min_workers defaults to 0.
max_workers: 1

# Cloud-provider specific configuration.
provider:
    type: azure
    location: westus2
    resource_group: ray-cluster

# How Ray will authenticate with newly launched nodes.
auth:
    ssh_user: ubuntu
    # You must specify paths to matching private and public key pair files.
    # Use `ssh-keygen -t rsa -b 4096` to generate a new ssh key pair.
    ssh_private_key: ~/.ssh/id_rsa
    # Changes to this should match what is specified in file_mounts.
    ssh_public_key: ~/.ssh/id_rsa.pub

GCP:

auth:
    ssh_user: ubuntu
cluster_name: minimal
provider:
    availability_zone: us-west1-a
    project_id: null # TODO: set your GCP project ID here
    region: us-west1
    type: gcp
TPU Configuration¶
It is possible to use TPU VMs on GCP.
Currently, TPU pods (TPUs other than v2-8, v3-8 and v4-8) are not supported.
Before using a config with TPUs, ensure that the TPU API is enabled for your GCP project.
# This config is an example TPU config allowing you to run
# https://github.com/Yard1/swarm-jax on GCP TPUs.
# Replace provider.project_id with your GCP project id.
# After the nodes are up, run:
# ray attach tpu.yaml swarm_tpu_jax.py swarm-jax/data/enwik8 [NUM_TPUS] [EPOCHS]

# A unique identifier for the head node and workers of this cluster.
cluster_name: tputest

# The maximum number of worker nodes to launch in addition to the head
# node.
max_workers: 7

available_node_types:
    ray_head_default:
        resources: {"CPU": 2}
        node_config:
            machineType: n2-standard-2
            disks:
              - boot: true
                autoDelete: true
                type: PERSISTENT
                initializeParams:
                    diskSizeGb: 50
                    # See https://cloud.google.com/compute/docs/images for more images
                    sourceImage: projects/deeplearning-platform-release/global/images/family/common-cpu
    ray_tpu:
        min_workers: 7
        resources: {"TPU": 1}  # use TPU custom resource in your code
        node_config:
            # Only v2-8, v3-8 and v4-8 accelerator types are currently supported.
            # Support for TPU pods will be added in the future.
            acceleratorType: v2-8
            runtimeVersion: v2-alpha
            schedulingConfig:
                # Set to false to use non-preemptible TPUs
                preemptible: true

provider:
    type: gcp
    region: us-central1
    availability_zone: us-central1-b
    project_id: null # replace with your GCP project id

setup_commands: []

# Specify the node type of the head node (as configured above).
# TPUs cannot be head nodes (will raise an exception).
head_node_type: ray_head_default

# Compute instances have python 3.7, but TPUs have 3.8 - need to update.
# Install Jax and other dependencies on the Compute head node.
head_setup_commands:
    # The first two lines are a workaround for ssh timing out.
    - sleep 2
    - sleep 2
    - sudo chown -R $(whoami) /opt/conda/*
    - conda create -y -n "ray" python=3.8.5
    - conda activate ray && echo 'conda activate ray' >> ~/.bashrc
    - python -m pip install --upgrade pip
    - python -m pip install --upgrade "jax[cpu]==0.2.14"
    - python -m pip install --upgrade fabric dataclasses optax==0.0.6 git+https://github.com/deepmind/dm-haiku google-api-python-client cryptography tensorboardX ray[default]
    - python -m pip install -U https://s3-us-west-2.amazonaws.com/ray-wheels/latest/ray-3.0.0.dev0-cp38-cp38-manylinux2014_x86_64.whl
    - git clone https://github.com/Yard1/swarm-jax.git && cd swarm-jax && python -m pip install .

# Install Jax and other dependencies on the TPU workers.
worker_setup_commands:
    - pip3 install --upgrade pip
    - pip3 install --upgrade "jax[tpu]==0.2.14" -f https://storage.googleapis.com/jax-releases/libtpu_releases.html
    - pip3 install --upgrade fabric dataclasses optax==0.0.6 git+https://github.com/deepmind/dm-haiku tensorboardX ray[default]
    - python3 -c "import jax; jax.device_count(); jax.numpy.add(1, 1)"  # test if Jax has been installed correctly
    - pip3 install -U https://s3-us-west-2.amazonaws.com/ray-wheels/latest/ray-3.0.0.dev0-cp38-cp38-manylinux2014_x86_64.whl
    - git clone https://github.com/Yard1/swarm-jax.git && cd swarm-jax && sudo pip3 install .