FiftyOne Teams Installation¶
FiftyOne Teams deployments come with a centralized FiftyOne Teams App and database that allow your entire team to collaborate securely on the same datasets. FiftyOne Teams is deployed entirely into your environment, either on-premises or in a private cloud. Your data never leaves your environment.
FiftyOne Teams can be deployed on a wide variety of infrastructure solutions, including Kubernetes and Docker.
Note
Detailed instructions for the initial FiftyOne Teams deployment, along with all necessary components, are made available by your Voxel51 CS engineer during the onboarding process.
Python SDK¶
While the FiftyOne Teams App allows for countless new App-centric workflows, any existing Python-based workflows that you’ve fallen in love with in the open-source version of FiftyOne are still directly applicable!
FiftyOne Teams requires an updated Python SDK, which is a wrapper around the open-source FiftyOne package that adds new functionality like support for cloud-backed media.
You can find the installation instructions under the “Install FiftyOne” section of the Teams App by clicking on your user icon in the upper right corner:

There you’ll see instructions for installing a fiftyone package from the private PyPI server as shown below:
pip install --index-url https://${TOKEN}@pypi.fiftyone.ai fiftyone
Note
See Installation with Poetry if you use poetry instead of pip.
Note
The Teams Python package is also named fiftyone and has the same module structure as the open-source package, so any existing scripts you built using open source will continue to run after you upgrade!
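For example, a script like the following, written against open-source FiftyOne, runs unchanged once the Teams package is installed (the zoo dataset is just an illustration):

import fiftyone as fo
import fiftyone.zoo as foz

# Any open-source workflow works as-is with the Teams package
dataset = foz.load_zoo_dataset("quickstart")

# Launch the App and inspect the dataset, exactly as in open source
session = fo.launch_app(dataset)
print(dataset.count())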
Next Steps¶
After installing the Teams Python SDK in your virtual environment, you’ll need to configure two things:
Your team’s API connection or MongoDB connection
The cloud credentials to access your cloud-backed media
That’s it! Any operations you perform will be stored in a centralized location and will be available to all users with access to the same datasets in the Teams App or their Python workflows.
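For example, once both are configured, a minimal sanity check like the following will list the datasets stored in your team’s centralized database:

import fiftyone as fo

# Datasets are read from the centralized Teams deployment,
# not from a local database
print(fo.list_datasets())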
Installation with Poetry¶
If you are using poetry to install your dependencies rather than pip, you will need to follow the instructions in the poetry docs for installing from a private repository. The two key points are specifying the additional private source and declaring that the fiftyone module should be found there and not the default PyPI location.
Add source¶
In poetry v1.5 and later, it is recommended to use an explicit package source.
poetry source add --priority=explicit fiftyone-teams https://pypi.fiftyone.ai/simple/
Prior to v1.5, you should use the deprecated secondary package source.
poetry source add --secondary fiftyone-teams https://pypi.fiftyone.ai/simple/
Configure credentials¶
poetry config http-basic.fiftyone-teams ${TOKEN} ""
Alternatively, you can specify the credentials in environment variables.
export POETRY_HTTP_BASIC_FIFTYONE_TEAMS_USERNAME="${TOKEN}"
export POETRY_HTTP_BASIC_FIFTYONE_TEAMS_PASSWORD=""
If you have trouble configuring the credentials, see the poetry docs for more details.
Add fiftyone dependency¶
Replace X.Y.Z with the proper version:
poetry add --source fiftyone-teams fiftyone==X.Y.Z
Note
Due to an unresolved misalignment between poetry and a FiftyOne dependency, kaleido, you must add it to your own dependencies as well:
poetry add kaleido==0.2.1
You should then see snippets like the following in the pyproject.toml file (the priority line will be different for poetry<v1.5):
[[tool.poetry.source]]
name = "fiftyone-teams"
url = "https://pypi.fiftyone.ai"
priority = "explicit"
[tool.poetry.dependencies]
fiftyone = {version = "X.Y.Z", source = "fiftyone-teams"}
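As a quick sanity check, you can confirm that poetry resolved the package from its private source:

poetry run python -c "import fiftyone; print(fiftyone.__version__)"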
Cloud credentials¶
Cross-Origin Resource Sharing (CORS)¶
If your datasets will include cloud-backed point-cloud files or segmentation maps, you may also need to configure cross-origin resource sharing (CORS) for your cloud buckets. Details are provided below for each cloud platform.
Amazon S3¶
To work with FiftyOne datasets whose media are stored in Amazon S3, you simply need to provide AWS credentials to your Teams client with read access to the relevant objects and buckets.
You can do this in any of the following ways:
1. Configure/provide AWS credentials in accordance with the boto3 Python library.
2. Permanently register AWS credentials on a particular machine by adding the following keys to your media cache config:
{
    "aws_config_file": "/path/to/aws-config.ini",
    "aws_profile": "default" # optional
}
In the above, the .ini file should use the syntax of the boto3 configuration file.
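For reference, a minimal aws-config.ini might look like the following (all values are placeholders):

[default]
aws_access_key_id = ...
aws_secret_access_key = ...
aws_session_token = ... # if applicable
region = ... # if applicable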
Note
FiftyOne Teams requires either the s3:ListBucket or s3:GetBucketLocation permission in order to access objects in S3 buckets. If you wish to use multi-account credentials, your credentials must have the s3:ListBucket permission, as s3:GetBucketLocation does not support this.
If you need to configure CORS on your AWS buckets, here is an example configuration:
[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["GET", "HEAD"],
        "AllowedOrigins": ["https://fiftyone-teams-deployment.yourcompany.com"],
        "ExposeHeaders": [
            "x-amz-server-side-encryption",
            "x-amz-request-id",
            "x-amz-id-2"
        ],
        "MaxAgeSeconds": 3000
    }
]
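You can apply such a configuration with the AWS CLI, for example (the bucket name is a placeholder; note that the CLI expects the rules to be wrapped in a top-level CORSRules key):

# cors.json contains {"CORSRules": [ ...rules shown above... ]}
aws s3api put-bucket-cors --bucket your-bucket --cors-configuration file://cors.json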
Google Cloud Storage¶
To work with FiftyOne datasets whose media are stored in Google Cloud Storage, you simply need to provide service account credentials to your Teams client with read access to the relevant objects and buckets.
You can register GCP credentials on a particular machine by adding the following key to your media cache config:
{
    "google_application_credentials": "/path/to/gcp-service-account.json"
}
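Once credentials are registered, cloud-backed media can be referenced directly by their gs:// paths. For example (bucket, object, and dataset names are hypothetical):

import fiftyone as fo

# Filepaths may point directly at objects in your GCS buckets
sample = fo.Sample(filepath="gs://your-bucket/path/to/image.jpg")

dataset = fo.Dataset("gcs-example")
dataset.add_sample(sample)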
If you need to configure CORS on your GCP buckets, here is an example configuration:
[
    {
        "origin": ["https://fiftyone-teams-deployment.yourcompany.com"],
        "method": ["GET", "HEAD"],
        "responseHeader": ["*"],
        "maxAgeSeconds": 86400
    }
]
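You can apply such a configuration with the gsutil CLI, for example (the bucket name is a placeholder):

gsutil cors set cors.json gs://your-bucket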
Microsoft Azure¶
To work with FiftyOne datasets whose media are stored in Azure Storage, you simply need to provide Azure credentials to your Teams client with read access to the relevant objects and containers.
You can do this in any of the following ways:
1. Permanently register Azure credentials on a particular machine by adding the following keys to your media cache config:
{
    "azure_credentials_file": "/path/to/azure-credentials.ini",
    "azure_profile": "default" # optional
}
2. Provide Azure credentials on a per-session basis by setting the following environment variables to point to your Azure credentials on disk:
export AZURE_CREDENTIALS_FILE=/path/to/azure-credentials.ini
export AZURE_PROFILE=default # optional
3. Provide your Azure credentials on a per-session basis by setting any group of environment variables shown below:
# Option 1
export AZURE_STORAGE_CONNECTION_STRING=...
export AZURE_ALIAS=... # optional
# Option 2
export AZURE_STORAGE_ACCOUNT=...
export AZURE_STORAGE_KEY=...
export AZURE_ALIAS=... # optional
# Option 3
export AZURE_STORAGE_ACCOUNT=...
export AZURE_CLIENT_ID=...
export AZURE_CLIENT_SECRET=...
export AZURE_TENANT_ID=...
export AZURE_ALIAS=... # optional
4. Provide your Azure credentials in any manner recognized by azure.identity.DefaultAzureCredential.
In the options above, the .ini file should have syntax similar to one of the following:
[default]
conn_str = ...
alias = ... # optional

[default]
account_name = ...
account_key = ...
alias = ... # optional

[default]
account_name = ...
client_id = ...
secret = ...
tenant = ...
alias = ... # optional
When populating samples with Azure Storage filepaths, you can either specify paths by their full URL:
filepath = "https://${account_name}.blob.core.windows.net/container/path/to/object.ext"
# For example
filepath = "https://voxel51.blob.core.windows.net/test-container/image.jpg"
or, if you have defined an alias in your config, you may instead prefix the alias:
filepath = "${alias}://container/path/to/object.ext"
# For example
filepath = "az://test-container/image.jpg"
Note
If you use a custom Azure domain, you can provide it by setting the AZURE_STORAGE_ACCOUNT_URL environment variable or by including the account_url key in your credentials .ini file.
MinIO¶
To work with FiftyOne datasets whose media are stored in MinIO, you simply need to provide the credentials to your Teams client with read access to the relevant objects and buckets.
You can do this in any of the following ways:
1. Permanently register MinIO credentials on a particular machine by adding the following keys to your media cache config:
{
    "minio_config_file": "/path/to/minio-config.ini",
    "minio_profile": "default" # optional
}
2. Provide MinIO credentials on a per-session basis by setting the following environment variables to point to your MinIO credentials on disk:
export MINIO_CONFIG_FILE=/path/to/minio-config.ini
export MINIO_PROFILE=default # optional
3. Provide your MinIO credentials on a per-session basis by setting the individual environment variables shown below:
export MINIO_ACCESS_KEY=...
export MINIO_SECRET_ACCESS_KEY=...
export MINIO_ENDPOINT_URL=...
export MINIO_ALIAS=... # optional
export MINIO_REGION=... # if applicable
In the options above, the .ini file should have syntax similar to the following:
[default]
access_key = ...
secret_access_key = ...
endpoint_url = ...
alias = ... # optional
region = ... # if applicable
When populating samples with MinIO filepaths, you can either specify paths by prefixing your MinIO endpoint URL:
filepath = "${endpoint_url}/bucket/path/to/object.ext"
# For example
filepath = "https://voxel51.min.io/test-bucket/image.jpg"
or, if you have defined an alias in your config, you may instead prefix the alias:
filepath = "${alias}://bucket/path/to/object.ext"
# For example
filepath = "minio://test-bucket/image.jpg"