Warning: Permanently added 'localhost' (ED25519) to the list of known hosts. [message repeated 3 times, once per SSH connection]
[15:28:07 PST] Transferring environment variable script to build VM
[15:28:09 PST] Transferring kokoro_log_reader.py to build VM
[15:28:10 PST] Transferring source code to build VM
[15:28:26 PST] Executing build script on build VM
[ID: 4692988] Executing command via SSH:
export KOKORO_BUILD_NUMBER="3360"
export KOKORO_JOB_NAME="cloud-devrel/client-libraries/python/googleapis/python-aiplatform/continuous/system"
source /tmpfs/kokoro-env_vars.sh
cd /tmpfs/src/
chmod 755 github/python-aiplatform/.kokoro/trampoline.sh
PYTHON_3_VERSION="$(pyenv which python3 2> /dev/null || which python3)"
PYTHON_2_VERSION="$(pyenv which python2 2> /dev/null || which python2)"
if "$PYTHON_3_VERSION" -c "import psutil" ; then KOKORO_PYTHON_COMMAND="$PYTHON_3_VERSION" ; else KOKORO_PYTHON_COMMAND="$PYTHON_2_VERSION" ; fi > /dev/null 2>&1
echo "export KOKORO_PYTHON_COMMAND="$KOKORO_PYTHON_COMMAND"" > "$HOME/.kokoro_python_vars"
nohup bash -c "( rm -f /tmpfs/kokoro_build_exit_code ; github/python-aiplatform/.kokoro/trampoline.sh ; echo \${PIPESTATUS[0]} > /tmpfs/kokoro_build_exit_code ) > /tmpfs/kokoro_build.log 2>&1" > /dev/null 2>&1 &
echo $! > /tmpfs/kokoro_build.pid
source "$HOME/.kokoro_python_vars"
"$KOKORO_PYTHON_COMMAND" /tmpfs/kokoro_log_reader.py /tmpfs/kokoro_build.log /tmpfs/kokoro_build_exit_code /tmpfs/kokoro_build.pid /tmpfs/kokoro_log_reader.pid --start_byte 0
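The command above launches the trampoline in the background and then tails its output. A minimal sketch of what a log reader like kokoro_log_reader.py plausibly does, inferred only from the arguments visible here (stream the log from --start_byte until the exit-code file written by the background trampoline appears); the real script is not shown in this log, so treat this as an assumption:

```python
import os
import sys
import time

def tail_until_done(log_path, exit_code_path, start_byte=0, poll_secs=1.0):
    """Stream new bytes from log_path until exit_code_path appears."""
    pos = start_byte
    done = False
    while not done:
        done = os.path.exists(exit_code_path)  # check first, then drain the log once more
        if os.path.exists(log_path):
            with open(log_path, "rb") as f:
                f.seek(pos)
                chunk = f.read()
            if chunk:
                sys.stdout.buffer.write(chunk)
                sys.stdout.flush()
                pos += len(chunk)
        if not done:
            time.sleep(poll_secs)
    with open(exit_code_path) as f:
        # empty file means the trampoline died before writing a code; treat as failure
        return int(f.read().strip() or "1")

if __name__ == "__main__":
    sys.exit(tail_until_done("/tmpfs/kokoro_build.log",
                             "/tmpfs/kokoro_build_exit_code"))
```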
2025-02-27 15:28:27 Creating folder on disk for secrets: /tmpfs/src/gfile/secret_manager
Activated service account credentials for: [kokoro-trampoline@cloud-devrel-kokoro-resources.iam.gserviceaccount.com]
WARNING: Your config file at [/home/kbuilder/.docker/config.json] contains these credential helper entries:
{
  "credHelpers": {
    "gcr.io": "gcr",
    "us.gcr.io": "gcr",
    "asia.gcr.io": "gcr",
    "staging-k8s.gcr.io": "gcr",
    "eu.gcr.io": "gcr"
  }
}
These will be overwritten.
Docker configuration file updated.
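For reference, the credHelpers mapping above means Docker delegates registry auth to an external binary: for gcr.io it execs docker-credential-gcr with the verb "get", writes the registry URL on stdin, and reads a JSON reply. A sketch of that standard credential-helper protocol; the helper name comes from the config shown, but this Python driver is illustrative, not part of the build:

```python
import json
import subprocess

def registry_credentials(registry: str, helper: str = "gcr") -> dict:
    """Ask a Docker credential helper for credentials, per the helper protocol."""
    proc = subprocess.run(
        [f"docker-credential-{helper}", "get"],  # e.g. docker-credential-gcr get
        input=registry,                          # registry URL goes on stdin
        capture_output=True,
        text=True,
        check=True,
    )
    # Reply shape per the protocol: {"ServerURL": ..., "Username": ..., "Secret": ...}
    return json.loads(proc.stdout)

# creds = registry_credentials("gcr.io")
```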
Using default tag: latest
latest: Pulling from cloud-devrel-kokoro-resources/python-multi
[25 image layers pulled; per-layer "Pulling fs layer" / "Waiting" / "Verifying Checksum" / "Download complete" / "Pull complete" progress lines omitted]
Digest: sha256:647803a30a8b5edb405c939a25bf41644d72614a1360fd670746a62b73841c4e
Status: Downloaded newer image for gcr.io/cloud-devrel-kokoro-resources/python-multi:latest
gcr.io/cloud-devrel-kokoro-resources/python-multi:latest
Executing: docker run --rm --interactive --network=host --privileged --volume=/var/run/docker.sock:/var/run/docker.sock --workdir=/tmpfs/src --entrypoint=github/python-aiplatform/.kokoro/build.sh --env-file=/tmpfs/tmp/tmpyv0o28xe/envfile --volume=/tmpfs:/tmpfs gcr.io/cloud-devrel-kokoro-resources/python-multi
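The same invocation, restated with one comment per flag. This is a readable sketch of the log line above, not a separate implementation:

```python
import subprocess

subprocess.run([
    "docker", "run",
    "--rm",                       # remove the container when the build exits
    "--interactive",
    "--network=host",             # share the build VM's network stack
    "--privileged",               # needed for Docker-in-Docker style builds
    "--volume=/var/run/docker.sock:/var/run/docker.sock",  # reuse the host daemon
    "--workdir=/tmpfs/src",
    "--entrypoint=github/python-aiplatform/.kokoro/build.sh",
    "--env-file=/tmpfs/tmp/tmpyv0o28xe/envfile",  # carries the KOKORO_* variables below
    "--volume=/tmpfs:/tmpfs",     # expose the VM's scratch/source tree
    "gcr.io/cloud-devrel-kokoro-resources/python-multi",
], check=True)
```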
KOKORO_KEYSTORE_DIR=/tmpfs/src/keystore
KOKORO_GITHUB_COMMIT_URL=https://github.com/googleapis/python-aiplatform/commit/4c8c277066d6f56f49b99e769344c09356e87c3d
KOKORO_JOB_NAME=cloud-devrel/client-libraries/python/googleapis/python-aiplatform/continuous/system
KOKORO_JOB_CLUSTER=GCP_UBUNTU
KOKORO_GIT_COMMIT=4c8c277066d6f56f49b99e769344c09356e87c3d
KOKORO_BLAZE_DIR=/tmpfs/src/objfs
KOKORO_ROOT=/tmpfs
KOKORO_JOB_TYPE=CONTINUOUS_INTEGRATION
KOKORO_ROOT_DIR=/tmpfs/
KOKORO_BUILD_NUMBER=3360
KOKORO_JOB_POOL=yoshi-ubuntu
KOKORO_GITHUB_COMMIT=4c8c277066d6f56f49b99e769344c09356e87c3d
KOKORO_BUILD_INITIATOR=kokoro-github-subscriber
KOKORO_ARTIFACTS_DIR=/tmpfs/src
KOKORO_BUILD_ID=b7bc8b3a-cce1-4877-8d7e-947f38bb71cf
KOKORO_GFILE_DIR=/tmpfs/src/gfile
KOKORO_BUILD_CONFIG_DIR=
KOKORO_POSIX_ROOT=/tmpfs
KOKORO_BUILD_ARTIFACTS_SUBDIR=prod/cloud-devrel/client-libraries/python/googleapis/python-aiplatform/continuous/system/3360/20250227-152721
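A hypothetical example of how a build or test script might consume the KOKORO_* variables dumped above; the variable names match the log, but this helper itself is not from the repository:

```python
import os
from pathlib import Path

def build_metadata() -> dict:
    """Collect the CI context a script typically needs from the environment."""
    return {
        "build_id": os.environ["KOKORO_BUILD_ID"],
        "build_number": int(os.environ["KOKORO_BUILD_NUMBER"]),
        "commit": os.environ["KOKORO_GIT_COMMIT"],
        "artifacts_dir": Path(os.environ.get("KOKORO_ARTIFACTS_DIR", "/tmpfs/src")),
    }
```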
WARNING: Skipping nox-automation as it is not installed.
[notice] A new release of pip is available: 23.0.1 -> 25.0.1
[notice] To update, run: pip install --upgrade pip
2025.2.9
nox > Running session system-3.10
nox > Creating virtual environment (virtualenv) using python3.10 in .nox/system-3-10
nox > python -m pip install --pre 'grpcio!=1.52.0rc1'
nox > python -m pip install mock pytest google-cloud-testutils -c /tmpfs/src/github/python-aiplatform/testing/constraints-3.10.txt
nox > python -m pip install -e '.[testing]' -c /tmpfs/src/github/python-aiplatform/testing/constraints-3.10.txt
nox > py.test -v --junitxml=system_3.10_sponge_log.xml tests/system
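The four nox lines above correspond to a session definition roughly like the following; a sketch inferred from the log, not the repository's actual noxfile.py:

```python
import nox

@nox.session(python="3.10")
def system(session):
    # Constraints file path matches the one shown in the log, relative to the repo root.
    constraints = "testing/constraints-3.10.txt"
    session.install("--pre", "grpcio!=1.52.0rc1")
    session.install("mock", "pytest", "google-cloud-testutils", "-c", constraints)
    session.install("-e", ".[testing]", "-c", constraints)
    session.run(
        "py.test", "-v", "--junitxml=system_3.10_sponge_log.xml", "tests/system"
    )
```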
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pytest_asyncio/plugin.py:207: PytestDeprecationWarning: The configuration option "asyncio_default_fixture_loop_scope" is unset.
The event loop scope for asynchronous fixtures will default to the fixture caching scope. Future versions of pytest-asyncio will default the loop scope for asynchronous fixtures to function scope. Set the default fixture loop scope explicitly in order to avoid unexpected behavior in the future. Valid fixture loop scopes are: "function", "class", "module", "package", "session"
warnings.warn(PytestDeprecationWarning(_DEFAULT_FIXTURE_LOOP_SCOPE_UNSET))
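As the warning says, the project-wide fix is setting the asyncio_default_fixture_loop_scope ini option; the loop scope can also be pinned per fixture. A hedged example of the per-fixture form (the fixture below is illustrative, not from tests/system):

```python
import pytest_asyncio

# Pinning loop_scope explicitly avoids relying on the deprecated default
# (pytest-asyncio >= 0.24 accepts the loop_scope argument).
@pytest_asyncio.fixture(loop_scope="function")
async def aiplatform_client():
    yield object()  # stand-in for an async client resource
```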
============================= test session starts ==============================
platform linux -- Python 3.10.15, pytest-8.3.4, pluggy-1.5.0 -- /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
cachedir: .pytest_cache
rootdir: /tmpfs/src/github/python-aiplatform
plugins: xdist-3.3.1, anyio-3.7.1, asyncio-0.25.3
asyncio: mode=strict, asyncio_default_fixture_loop_scope=None
created: 16/16 workers
[the pytest_asyncio PytestDeprecationWarning above is repeated by each of the 16 xdist workers; 16 duplicate copies omitted]
16 workers [248 items]
scheduling tests via LoadScopeScheduling
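"created: 16/16 workers" plus "scheduling tests via LoadScopeScheduling" indicates pytest-xdist running with --dist loadscope, which keeps every test from a given module or class on the same worker so class-scoped fixtures (shared GCP resources) are set up once per worker. A minimal local reproduction, assuming the same test paths:

```python
import pytest

# -n sets the worker count; --dist loadscope selects LoadScopeScheduling.
raise SystemExit(
    pytest.main(["-n", "16", "--dist", "loadscope", "-v", "tests/system"])
)
```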
tests/system/aiplatform/test_autologging.py::TestAutologging::test_autologging_with_autorun_creation
tests/system/aiplatform/test_custom_job.py::TestCustomJob::test_from_local_script_prebuilt_container
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_create_get_list_featurestore
tests/system/aiplatform/test_initializer.py::TestInitializer::test_init_calls_set_google_auth_default
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation[grpc]
tests/system/aiplatform/test_experiments.py::TestExperiments::test_create_experiment
tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_sklearn_model
tests/system/aiplatform/test_matching_engine_index.py::TestMatchingEngine::test_create_get_list_matching_engine_index
tests/system/aiplatform/test_e2e_forecasting.py::TestEndToEndForecasting3::test_end_to_end_forecasting[TemporalFusionTransformerForecastingTrainingJob]
tests/system/aiplatform/test_e2e_forecasting.py::TestEndToEndForecasting1::test_end_to_end_forecasting[AutoMLForecastingTrainingJob]
tests/system/aiplatform/test_e2e_forecasting.py::TestEndToEndForecasting4::test_end_to_end_forecasting[TimeSeriesDenseEncoderForecastingTrainingJob]
tests/system/aiplatform/test_e2e_tabular.py::TestEndToEndTabular::test_end_to_end_tabular
tests/system/aiplatform/test_e2e_metadata_schema.py::TestMetadataSchema::test_system_dataset_artifact_create
tests/system/aiplatform/test_e2e_forecasting.py::TestEndToEndForecasting2::test_end_to_end_forecasting[SequenceToSequencePlusForecastingTrainingJob]
[gw13] [ 0%] PASSED tests/system/aiplatform/test_initializer.py::TestInitializer::test_init_calls_set_google_auth_default
tests/system/aiplatform/test_dataset.py::TestDataset::test_get_existing_dataset
tests/system/aiplatform/test_batch_prediction.py::TestBatchPredictionJob::test_model_monitoring
[gw3] [ 0%] SKIPPED tests/system/aiplatform/test_dataset.py::TestDataset::test_get_existing_dataset
tests/system/aiplatform/test_dataset.py::TestDataset::test_get_nonexistent_dataset
tests/system/aiplatform/test_initializer.py::TestInitializer::test_init_rest_async_incorrect_credentials
[gw13] [ 1%] PASSED tests/system/aiplatform/test_initializer.py::TestInitializer::test_init_rest_async_incorrect_credentials
tests/system/aiplatform/test_pipeline_job.py::TestPipelineJob::test_add_pipeline_job_to_experiment
[gw3] [ 1%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_get_nonexistent_dataset
tests/system/aiplatform/test_dataset.py::TestDataset::test_get_new_dataset_and_import
[gw8] [ 2%] PASSED tests/system/aiplatform/test_e2e_metadata_schema.py::TestMetadataSchema::test_system_dataset_artifact_create
tests/system/aiplatform/test_e2e_metadata_schema.py::TestMetadataSchema::test_google_dataset_artifact_create
[gw14] [ 2%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation[rest]
[gw8] [ 2%] PASSED tests/system/aiplatform/test_e2e_metadata_schema.py::TestMetadataSchema::test_google_dataset_artifact_create
tests/system/aiplatform/test_e2e_metadata_schema.py::TestMetadataSchema::test_execution_create_using_system_schema_class
[gw8] [ 3%] PASSED tests/system/aiplatform/test_e2e_metadata_schema.py::TestMetadataSchema::test_execution_create_using_system_schema_class
[gw14] [ 3%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation_preview_count_tokens[grpc]
tests/system/aiplatform/test_project_id_inference.py::TestProjectIDInference::test_project_id_inference
[gw11] [ 4%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_create_experiment
tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_experiment
[gw14] [ 4%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation_preview_count_tokens[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation_preview_count_tokens[rest]
[gw11] [ 4%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_experiment
tests/system/aiplatform/test_experiments.py::TestExperiments::test_start_run
[gw14] [ 5%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation_preview_count_tokens[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation_model_predict_async
[gw14] [ 5%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation_model_predict_async
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation_streaming[grpc]
[gw14] [ 6%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation_streaming[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation_streaming[rest]
[gw10] [ 6%] PASSED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_sklearn_model
tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_xgboost_booster_with_custom_uri
[gw11] [ 6%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_start_run
tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_run
[gw14] [ 7%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation_streaming[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_preview_text_generation_from_pretrained[grpc]
[gw11] [ 7%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_run
tests/system/aiplatform/test_experiments.py::TestExperiments::test_log_params
[gw10] [ 8%] FAILED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_xgboost_booster_with_custom_uri
tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_xgboost_xgbmodel_with_custom_names
[gw14] [ 8%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_preview_text_generation_from_pretrained[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_preview_text_generation_from_pretrained[rest]
[gw10] [ 8%] FAILED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_xgboost_xgbmodel_with_custom_names
tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_tensorflow_keras_model_with_input_example
[gw14] [ 9%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_preview_text_generation_from_pretrained[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_on_chat_model[grpc]
[gw11] [ 9%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_log_params
tests/system/aiplatform/test_experiments.py::TestExperiments::test_log_metrics
[gw14] [ 10%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_on_chat_model[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_on_chat_model[rest]
[gw0] [ 10%] FAILED tests/system/aiplatform/test_autologging.py::TestAutologging::test_autologging_with_autorun_creation
tests/system/aiplatform/test_autologging.py::TestAutologging::test_autologging_with_manual_run_creation
[gw11] [ 10%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_log_metrics
tests/system/aiplatform/test_experiments.py::TestExperiments::test_log_time_series_metrics
[gw14] [ 11%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_on_chat_model[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_model_preview_count_tokens[grpc]
[gw10] [ 11%] FAILED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_tensorflow_keras_model_with_input_example
tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_tensorflow_module_with_gpu_container
[gw14] [ 12%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_model_preview_count_tokens[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_model_preview_count_tokens[rest]
[gw14] [ 12%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_model_preview_count_tokens[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_model_async
[gw11] [ 12%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_log_time_series_metrics
tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_time_series_data_frame_batch_read_success
[gw14] [ 13%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_model_async
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_model_send_message_streaming[grpc]
[gw14] [ 13%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_model_send_message_streaming[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_model_send_message_streaming[rest]
[gw10] [ 14%] FAILED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_tensorflow_module_with_gpu_container
tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_deploy_model_with_cpu_container
[gw14] [ 14%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_model_send_message_streaming[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_embedding[grpc]
[gw14] [ 14%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_embedding[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_embedding[rest]
[gw14] [ 15%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_embedding[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_embedding_async
[gw14] [ 15%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_embedding_async
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_tuning[grpc]
[gw14] [ 16%] SKIPPED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_tuning[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_tuning[rest]
[gw14] [ 16%] SKIPPED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_tuning[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_batch_prediction_for_text_generation[grpc]
[gw0] [ 16%] PASSED tests/system/aiplatform/test_autologging.py::TestAutologging::test_autologging_with_manual_run_creation
tests/system/aiplatform/test_autologging.py::TestAutologging::test_autologging_nested_run_model
[gw11] [ 17%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_time_series_data_frame_batch_read_success
tests/system/aiplatform/test_experiments.py::TestExperiments::test_log_classification_metrics
[gw0] [ 17%] PASSED tests/system/aiplatform/test_autologging.py::TestAutologging::test_autologging_nested_run_model
[gw11] [ 18%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_log_classification_metrics
tests/system/aiplatform/test_experiments.py::TestExperiments::test_log_model
[gw11] [ 18%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_log_model
tests/system/aiplatform/test_experiments.py::TestExperiments::test_create_artifact
[gw11] [ 18%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_create_artifact
tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_artifact_by_uri
[gw11] [ 19%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_artifact_by_uri
tests/system/aiplatform/test_experiments.py::TestExperiments::test_log_execution_and_artifact
[gw14] [ 19%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_batch_prediction_for_text_generation[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_batch_prediction_for_text_generation[rest]
tests/system/aiplatform/test_tensorboard.py::TestTensorboard::test_create_and_get_tensorboard
[gw11] [ 20%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_log_execution_and_artifact
tests/system/aiplatform/test_experiments.py::TestExperiments::test_end_run
[gw11] [ 20%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_end_run
tests/system/aiplatform/test_experiments.py::TestExperiments::test_run_context_manager
[gw11] [ 20%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_run_context_manager
tests/system/aiplatform/test_experiments.py::TestExperiments::test_add_pipeline_job_to_experiment
[gw0] [ 21%] PASSED tests/system/aiplatform/test_tensorboard.py::TestTensorboard::test_create_and_get_tensorboard
tests/system/aiplatform/test_tensorboard.py::TestTensorboard::test_create_and_get_tensorboard_experiment
[gw0] [ 21%] PASSED tests/system/aiplatform/test_tensorboard.py::TestTensorboard::test_create_and_get_tensorboard_experiment
tests/system/aiplatform/test_tensorboard.py::TestTensorboard::test_create_and_get_tensorboard_run
[gw0] [ 22%] PASSED tests/system/aiplatform/test_tensorboard.py::TestTensorboard::test_create_and_get_tensorboard_run
tests/system/aiplatform/test_tensorboard.py::TestTensorboard::test_create_and_get_tensorboard_time_series
[gw0] [ 22%] PASSED tests/system/aiplatform/test_tensorboard.py::TestTensorboard::test_create_and_get_tensorboard_time_series
tests/system/aiplatform/test_tensorboard.py::TestTensorboard::test_write_tensorboard_scalar_data
[gw0] [ 22%] PASSED tests/system/aiplatform/test_tensorboard.py::TestTensorboard::test_write_tensorboard_scalar_data
tests/system/vertex_ray/test_cluster_management.py::TestClusterManagement::test_cluster_management[2.9]
[gw14] [ 23%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_batch_prediction_for_text_generation[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_batch_prediction_for_textembedding[grpc]
[gw13] [ 23%] PASSED tests/system/aiplatform/test_pipeline_job.py::TestPipelineJob::test_add_pipeline_job_to_experiment
tests/system/aiplatform/test_prediction_cpr.py::TestPredictionCpr::test_build_cpr_model_upload_and_deploy
[gw14] [ 24%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_batch_prediction_for_textembedding[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_batch_prediction_for_textembedding[rest]
[gw14] [ 24%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_batch_prediction_for_textembedding[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_batch_prediction_for_code_generation[grpc]
[gw11] [ 25%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_add_pipeline_job_to_experiment
tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_experiments_df
[gw11] [ 25%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_experiments_df
tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_experiments_df_include_time_series_false
[gw11] [ 25%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_experiments_df_include_time_series_false
tests/system/aiplatform/test_experiments.py::TestExperiments::test_delete_run_does_not_exist_raises_exception
[gw11] [ 26%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_delete_run_does_not_exist_raises_exception
tests/system/aiplatform/test_experiments.py::TestExperiments::test_delete_run_success
[gw14] [ 26%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_batch_prediction_for_code_generation[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_batch_prediction_for_code_generation[rest]
[gw11] [ 27%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_delete_run_success
tests/system/aiplatform/test_experiments.py::TestExperiments::test_reuse_run_success
[gw11] [ 27%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_reuse_run_success
tests/system/aiplatform/test_experiments.py::TestExperiments::test_delete_run_then_tensorboard_success
[gw11] [ 27%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_delete_run_then_tensorboard_success
tests/system/aiplatform/test_experiments.py::TestExperiments::test_delete_wout_backing_tensorboard_reuse_run_raises_exception
[gw11] [ 28%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_delete_wout_backing_tensorboard_reuse_run_raises_exception
tests/system/aiplatform/test_experiments.py::TestExperiments::test_delete_experiment_does_not_exist_raises_exception
[gw11] [ 28%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_delete_experiment_does_not_exist_raises_exception
tests/system/aiplatform/test_experiments.py::TestExperiments::test_init_associates_global_tensorboard_to_experiment
[gw11] [ 29%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_init_associates_global_tensorboard_to_experiment
tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_backing_tensorboard_resource_returns_tensorboard
[gw11] [ 29%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_backing_tensorboard_resource_returns_tensorboard
tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_backing_tensorboard_resource_returns_none
[gw11] [ 29%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_backing_tensorboard_resource_returns_none
tests/system/aiplatform/test_experiments.py::TestExperiments::test_delete_backing_tensorboard_experiment_run_success
[gw14] [ 30%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_batch_prediction_for_code_generation[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_code_generation_streaming[grpc]
[gw11] [ 30%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_delete_backing_tensorboard_experiment_run_success
[gw14] [ 31%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_code_generation_streaming[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_code_generation_streaming[rest]
tests/system/vertexai/test_batch_prediction.py::TestBatchPrediction::test_batch_prediction_with_gcs_input
[gw14] [ 31%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_code_generation_streaming[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_code_chat_model_send_message_streaming[grpc]
[gw14] [ 31%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_code_chat_model_send_message_streaming[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_code_chat_model_send_message_streaming[rest]
[gw14] [ 32%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_code_chat_model_send_message_streaming[rest]
tests/system/vertexai/test_prompts.py::TestPrompts::test_create_prompt_with_variables
[gw14] [ 32%] PASSED tests/system/vertexai/test_prompts.py::TestPrompts::test_create_prompt_with_variables
tests/system/vertexai/test_prompts.py::TestPrompts::test_create_prompt_with_function_calling
[gw14] [ 33%] PASSED tests/system/vertexai/test_prompts.py::TestPrompts::test_create_prompt_with_function_calling
tests/system/vertexai/test_prompts.py::TestPrompts::test_get_prompt_with_variables
[gw14] [ 33%] PASSED tests/system/vertexai/test_prompts.py::TestPrompts::test_get_prompt_with_variables
tests/system/vertexai/test_prompts.py::TestPrompts::test_get_prompt_with_function_calling
[gw14] [ 33%] PASSED tests/system/vertexai/test_prompts.py::TestPrompts::test_get_prompt_with_function_calling
tests/system/vertexai/test_reasoning_engines.py::TestReasoningEngines::test_langchain_template
[gw13] [ 34%] FAILED tests/system/aiplatform/test_prediction_cpr.py::TestPredictionCpr::test_build_cpr_model_upload_and_deploy
tests/system/aiplatform/test_private_endpoint.py::TestPrivateEndpoint::test_create_deploy_delete_private_endpoint
[gw8] [ 34%] PASSED tests/system/aiplatform/test_project_id_inference.py::TestProjectIDInference::test_project_id_inference
tests/system/aiplatform/test_telemetry.py::TestTelemetry::test_single_context_manager
[gw8] [ 35%] PASSED tests/system/aiplatform/test_telemetry.py::TestTelemetry::test_single_context_manager
tests/system/aiplatform/test_telemetry.py::TestTelemetry::test_nested_context_manager
[gw8] [ 35%] PASSED tests/system/aiplatform/test_telemetry.py::TestTelemetry::test_nested_context_manager
[gw1] [ 35%] PASSED tests/system/aiplatform/test_custom_job.py::TestCustomJob::test_from_local_script_prebuilt_container
tests/system/aiplatform/test_custom_job.py::TestCustomJob::test_from_local_script_custom_container
[gw12] [ 36%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_create_get_list_featurestore
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_create_get_list_entity_types
[gw12] [ 36%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_create_get_list_entity_types
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_create_get_list_features
[gw12] [ 37%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_create_get_list_features
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_ingest_feature_values
[gw11] [ 37%] PASSED tests/system/vertexai/test_batch_prediction.py::TestBatchPrediction::test_batch_prediction_with_gcs_input
tests/system/vertexai/test_batch_prediction.py::TestBatchPrediction::test_batch_prediction_with_bq_input
[gw3] [ 37%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_get_new_dataset_and_import
tests/system/aiplatform/test_dataset.py::TestDataset::test_create_and_import_image_dataset
[gw14] [ 38%] PASSED tests/system/vertexai/test_reasoning_engines.py::TestReasoningEngines::test_langchain_template
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[_get_tokenizer_for_model_preview-gemini-1.0-pro-udhr-udhr-PROD_ENDPOINT]
[gw14] [ 38%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[_get_tokenizer_for_model_preview-gemini-1.0-pro-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[_get_tokenizer_for_model_preview-gemini-1.5-pro-udhr-udhr-PROD_ENDPOINT]
[gw13] [ 39%] PASSED tests/system/aiplatform/test_private_endpoint.py::TestPrivateEndpoint::test_create_deploy_delete_private_endpoint
tests/system/vertex_ray/test_ray_data.py::TestRayData::test_ray_data[2.9]
[gw14] [ 39%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[_get_tokenizer_for_model_preview-gemini-1.5-pro-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[_get_tokenizer_for_model_preview-gemini-1.5-flash-udhr-udhr-PROD_ENDPOINT]
[gw1] [ 39%] PASSED tests/system/aiplatform/test_custom_job.py::TestCustomJob::test_from_local_script_custom_container
tests/system/aiplatform/test_custom_job.py::TestCustomJob::test_from_local_script_enable_autolog_prebuilt_container
[gw14] [ 40%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[_get_tokenizer_for_model_preview-gemini-1.5-flash-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[_get_tokenizer_for_model_preview-gemini-1.5-flash-002-udhr-udhr-PROD_ENDPOINT]
[gw14] [ 40%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[_get_tokenizer_for_model_preview-gemini-1.5-flash-002-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[_get_tokenizer_for_model_preview-gemini-1.5-pro-002-udhr-udhr-PROD_ENDPOINT]
[gw11] [ 41%] PASSED tests/system/vertexai/test_batch_prediction.py::TestBatchPrediction::test_batch_prediction_with_bq_input
[gw14] [ 41%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[_get_tokenizer_for_model_preview-gemini-1.5-pro-002-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[get_tokenizer_for_model-gemini-1.0-pro-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_with_cached_content_from_text[grpc-PROD_ENDPOINT]
[gw14] [ 41%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[get_tokenizer_for_model-gemini-1.0-pro-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[get_tokenizer_for_model-gemini-1.5-pro-udhr-udhr-PROD_ENDPOINT]
[gw0] [ 42%] PASSED tests/system/vertex_ray/test_cluster_management.py::TestClusterManagement::test_cluster_management[2.9]
tests/system/vertex_ray/test_cluster_management.py::TestClusterManagement::test_cluster_management[2.33]
[gw14] [ 42%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[get_tokenizer_for_model-gemini-1.5-pro-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[get_tokenizer_for_model-gemini-1.5-flash-udhr-udhr-PROD_ENDPOINT]
[gw14] [ 43%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[get_tokenizer_for_model-gemini-1.5-flash-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[get_tokenizer_for_model-gemini-1.5-flash-002-udhr-udhr-PROD_ENDPOINT]
[gw14] [ 43%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[get_tokenizer_for_model-gemini-1.5-flash-002-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[get_tokenizer_for_model-gemini-1.5-pro-002-udhr-udhr-PROD_ENDPOINT]
[gw12] [ 43%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_ingest_feature_values
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_create_features
[gw12] [ 44%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_create_features
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_ingest_feature_values_from_df_using_feature_time_column_and_online_read_multiple_entities
[gw11] [ 44%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_with_cached_content_from_text[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_with_cached_content_from_text[rest-PROD_ENDPOINT]
[gw14] [ 45%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[get_tokenizer_for_model-gemini-1.5-pro-002-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[_get_tokenizer_for_model_preview-gemini-1.0-pro-udhr-udhr-PROD_ENDPOINT]
[gw14] [ 45%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[_get_tokenizer_for_model_preview-gemini-1.0-pro-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[_get_tokenizer_for_model_preview-gemini-1.5-pro-udhr-udhr-PROD_ENDPOINT]
[gw11] [ 45%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_with_cached_content_from_text[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text[grpc-PROD_ENDPOINT]
[gw11] [ 46%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text[rest-PROD_ENDPOINT]
[gw14] [ 46%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[_get_tokenizer_for_model_preview-gemini-1.5-pro-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[_get_tokenizer_for_model_preview-gemini-1.5-flash-udhr-udhr-PROD_ENDPOINT]
[gw11] [ 47%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_latency[grpc-PROD_ENDPOINT]
[gw11] [ 47%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_latency[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_latency[rest-PROD_ENDPOINT]
[gw11] [ 47%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_latency[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_async[grpc-PROD_ENDPOINT]
[gw11] [ 48%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_async[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_async[rest-PROD_ENDPOINT]
[gw11] [ 48%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_async[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_streaming[grpc-PROD_ENDPOINT]
[gw11] [ 49%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_streaming[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_streaming[rest-PROD_ENDPOINT]
[gw11] [ 49%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_streaming[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_streaming_async[grpc-PROD_ENDPOINT]
[gw11] [ 50%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_streaming_async[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_streaming_async[rest-PROD_ENDPOINT]
[gw11] [ 50%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_streaming_async[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_with_parameters[grpc-PROD_ENDPOINT]
[gw11] [ 50%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_with_parameters[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_with_parameters[rest-PROD_ENDPOINT]
[gw11] [ 51%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_with_parameters[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_with_gemini_15_parameters[grpc-PROD_ENDPOINT]
[gw14] [ 51%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[_get_tokenizer_for_model_preview-gemini-1.5-flash-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[_get_tokenizer_for_model_preview-gemini-1.5-flash-002-udhr-udhr-PROD_ENDPOINT]
[gw11] [ 52%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_with_gemini_15_parameters[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_with_gemini_15_parameters[rest-PROD_ENDPOINT]
[gw11] [ 52%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_with_gemini_15_parameters[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_list_of_content_dict[grpc-PROD_ENDPOINT]
[gw2] [ 52%] PASSED tests/system/aiplatform/test_batch_prediction.py::TestBatchPredictionJob::test_model_monitoring
tests/system/aiplatform/test_model_evaluation.py::TestModelEvaluationJob::test_model_evaluate_custom_tabular_model
[gw11] [ 53%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_list_of_content_dict[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_list_of_content_dict[rest-PROD_ENDPOINT]
[gw11] [ 53%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_list_of_content_dict[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_remote_image[grpc-PROD_ENDPOINT]
[gw11] [ 54%] SKIPPED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_remote_image[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_remote_image[rest-PROD_ENDPOINT]
[gw11] [ 54%] SKIPPED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_remote_image[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_image[grpc-PROD_ENDPOINT]
[gw11] [ 54%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_image[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_image[rest-PROD_ENDPOINT]
[gw11] [ 55%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_image[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_video[grpc-PROD_ENDPOINT]
[gw11] [ 55%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_video[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_video[rest-PROD_ENDPOINT]
[gw11] [ 56%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_video[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_audio[grpc-PROD_ENDPOINT]
[gw11] [ 56%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_audio[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_audio[rest-PROD_ENDPOINT]
[gw14] [ 56%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[_get_tokenizer_for_model_preview-gemini-1.5-flash-002-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[_get_tokenizer_for_model_preview-gemini-1.5-pro-002-udhr-udhr-PROD_ENDPOINT]
[gw10] [ 57%] PASSED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_deploy_model_with_cpu_container
tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_deploy_model_with_gpu_container
[gw10] [ 57%] FAILED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_deploy_model_with_gpu_container
[gw11] [ 58%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_audio[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_grounding_google_search_retriever[grpc-PROD_ENDPOINT]
tests/system/aiplatform/test_vizier.py::TestVizier::test_vizier_lifecycle
[gw11] [ 58%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_grounding_google_search_retriever[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_grounding_google_search_retriever[rest-PROD_ENDPOINT]
[gw10] [ 58%] PASSED tests/system/aiplatform/test_vizier.py::TestVizier::test_vizier_lifecycle
tests/system/aiplatform/test_vizier.py::TestVizier::test_vizier_study_deletion
[gw10] [ 59%] PASSED tests/system/aiplatform/test_vizier.py::TestVizier::test_vizier_study_deletion
tests/system/aiplatform/test_vizier.py::TestVizier::test_vizier_trial_deletion
[gw10] [ 59%] PASSED tests/system/aiplatform/test_vizier.py::TestVizier::test_vizier_trial_deletion
[gw11] [ 60%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_grounding_google_search_retriever[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_grounding_google_search_retriever_with_dynamic_retrieval[grpc-PROD_ENDPOINT]
[gw11] [ 60%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_grounding_google_search_retriever_with_dynamic_retrieval[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_grounding_google_search_retriever_with_dynamic_retrieval[rest-PROD_ENDPOINT]
[gw11] [ 60%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_grounding_google_search_retriever_with_dynamic_retrieval[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_send_message_from_text[grpc-PROD_ENDPOINT]
[gw11] [ 61%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_send_message_from_text[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_send_message_from_text[rest-PROD_ENDPOINT]
[gw11] [ 61%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_send_message_from_text[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_chat_function_calling[grpc-PROD_ENDPOINT]
[gw11] [ 62%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_chat_function_calling[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_chat_function_calling[rest-PROD_ENDPOINT]
[gw11] [ 62%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_chat_function_calling[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_function_calling[grpc-PROD_ENDPOINT]
[gw11] [ 62%] FAILED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_function_calling[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_function_calling[rest-PROD_ENDPOINT]
[gw11] [ 63%] FAILED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_function_calling[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_model_router[grpc-PROD_ENDPOINT]
[gw14] [ 63%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[_get_tokenizer_for_model_preview-gemini-1.5-pro-002-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.0-pro-udhr-udhr-PROD_ENDPOINT]
[gw11] [ 64%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_model_router[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_model_router[rest-PROD_ENDPOINT]
[gw1] [ 64%] PASSED tests/system/aiplatform/test_custom_job.py::TestCustomJob::test_from_local_script_enable_autolog_prebuilt_container
tests/system/aiplatform/test_custom_job.py::TestCustomJob::test_from_local_script_enable_autolog_custom_container
[gw11] [ 64%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_model_router[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_chat_automatic_function_calling[grpc-PROD_ENDPOINT]
[gw11] [ 65%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_chat_automatic_function_calling[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_chat_automatic_function_calling[rest-PROD_ENDPOINT]
[gw11] [ 65%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_chat_automatic_function_calling[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_additional_request_metadata[grpc-PROD_ENDPOINT]
[gw11] [ 66%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_additional_request_metadata[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_additional_request_metadata[rest-PROD_ENDPOINT]
[gw11] [ 66%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_additional_request_metadata[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_compute_tokens_from_text[grpc-PROD_ENDPOINT]
[gw11] [ 66%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_compute_tokens_from_text[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_compute_tokens_from_text[rest-PROD_ENDPOINT]
[gw11] [ 67%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_compute_tokens_from_text[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_count_tokens_from_text[grpc-PROD_ENDPOINT]
[gw11] [ 67%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_count_tokens_from_text[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_count_tokens_from_text[rest-PROD_ENDPOINT]
[gw11] [ 68%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_count_tokens_from_text[rest-PROD_ENDPOINT]
[gw14] [ 68%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.0-pro-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.5-pro-udhr-udhr-PROD_ENDPOINT]
[gw14] [ 68%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.5-pro-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.5-flash-udhr-udhr-PROD_ENDPOINT]
[gw14] [ 69%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.5-flash-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.5-flash-002-udhr-udhr-PROD_ENDPOINT]
[gw3] [ 69%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_create_and_import_image_dataset
tests/system/aiplatform/test_dataset.py::TestDataset::test_create_tabular_dataset
[gw3] [ 70%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_create_tabular_dataset
tests/system/aiplatform/test_dataset.py::TestDataset::test_create_tabular_dataset_from_dataframe
[gw14] [ 70%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.5-flash-002-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.5-pro-002-udhr-udhr-PROD_ENDPOINT]
[gw1] [ 70%] PASSED tests/system/aiplatform/test_custom_job.py::TestCustomJob::test_from_local_script_enable_autolog_custom_container
[gw3] [ 71%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_create_tabular_dataset_from_dataframe
tests/system/aiplatform/test_dataset.py::TestDataset::test_create_tabular_dataset_from_dataframe_with_provided_schema
[gw3] [ 71%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_create_tabular_dataset_from_dataframe_with_provided_schema
tests/system/aiplatform/test_dataset.py::TestDataset::test_create_time_series_dataset
[gw12] [ 72%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_ingest_feature_values_from_df_using_feature_time_column_and_online_read_multiple_entities
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_ingest_feature_values_from_df_using_feature_time_datetime_and_online_read_single_entity
[gw3] [ 72%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_create_time_series_dataset
tests/system/aiplatform/test_dataset.py::TestDataset::test_export_data
[gw3] [ 72%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_export_data
tests/system/aiplatform/test_dataset.py::TestDataset::test_export_data_for_custom_training
[gw3] [ 73%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_export_data_for_custom_training
tests/system/aiplatform/test_dataset.py::TestDataset::test_update_dataset
[gw3] [ 73%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_update_dataset
[gw14] [ 74%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.5-pro-002-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.0-pro-PROD_ENDPOINT]
[gw14] [ 74%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.0-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.5-pro-PROD_ENDPOINT]
[gw14] [ 75%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.5-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.5-flash-PROD_ENDPOINT]
[gw14] [ 75%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.5-flash-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.5-flash-002-PROD_ENDPOINT]
[gw14] [ 75%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.5-flash-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.5-pro-002-PROD_ENDPOINT]
[gw14] [ 76%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.5-pro-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.0-pro-PROD_ENDPOINT]
[gw14] [ 76%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.0-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.5-pro-PROD_ENDPOINT]
[gw14] [ 77%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.5-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.5-flash-PROD_ENDPOINT]
[gw14] [ 77%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.5-flash-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.5-flash-002-PROD_ENDPOINT]
[gw14] [ 77%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.5-flash-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.5-pro-002-PROD_ENDPOINT]
[gw14] [ 78%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.5-pro-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.0-pro-PROD_ENDPOINT]
[gw14] [ 78%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.0-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.5-pro-PROD_ENDPOINT]
[gw14] [ 79%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.5-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.5-flash-PROD_ENDPOINT]
[gw14] [ 79%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.5-flash-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.5-flash-002-PROD_ENDPOINT]
[gw14] [ 79%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.5-flash-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.5-pro-002-PROD_ENDPOINT]
[gw14] [ 80%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.5-pro-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.0-pro-PROD_ENDPOINT]
[gw14] [ 80%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.0-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.5-pro-PROD_ENDPOINT]
[gw14] [ 81%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.5-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.5-flash-PROD_ENDPOINT]
[gw14] [ 81%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.5-flash-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.5-flash-002-PROD_ENDPOINT]
[gw14] [ 81%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.5-flash-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.5-pro-002-PROD_ENDPOINT]
[gw14] [ 82%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.5-pro-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.0-pro-PROD_ENDPOINT]
[gw14] [ 82%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.0-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.5-pro-PROD_ENDPOINT]
[gw14] [ 83%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.5-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.5-flash-PROD_ENDPOINT]
[gw14] [ 83%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.5-flash-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.5-flash-002-PROD_ENDPOINT]
[gw14] [ 83%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.5-flash-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.5-pro-002-PROD_ENDPOINT]
[gw14] [ 84%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.5-pro-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.0-pro-PROD_ENDPOINT]
[gw14] [ 84%] FAILED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.0-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-pro-PROD_ENDPOINT]
[gw14] [ 85%] FAILED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-flash-PROD_ENDPOINT]
[gw14] [ 85%] FAILED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-flash-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-flash-002-PROD_ENDPOINT]
[gw14] [ 85%] FAILED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-flash-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-pro-002-PROD_ENDPOINT]
[gw14] [ 86%] FAILED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-pro-002-PROD_ENDPOINT]
[gw12] [ 86%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_ingest_feature_values_from_df_using_feature_time_datetime_and_online_read_single_entity
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_write_features
[gw12] [ 87%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_write_features
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_search_features
[gw12] [ 87%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_search_features
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_serve_to_df
[gw12] [ 87%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_serve_to_df
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_serve_to_gcs
[gw12] [ 88%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_serve_to_gcs
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_serve_to_bq
[gw12] [ 88%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_serve_to_bq
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_online_reads
[gw12] [ 89%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_online_reads
[gw2] [ 89%] PASSED tests/system/aiplatform/test_model_evaluation.py::TestModelEvaluationJob::test_model_evaluate_custom_tabular_model
[gw13] [ 89%] FAILED tests/system/vertex_ray/test_ray_data.py::TestRayData::test_ray_data[2.9]
tests/system/vertex_ray/test_ray_data.py::TestRayData::test_ray_data[2.33]
[gw13] [ 90%] FAILED tests/system/vertex_ray/test_ray_data.py::TestRayData::test_ray_data[2.33]
[gw0] [ 90%] FAILED tests/system/vertex_ray/test_cluster_management.py::TestClusterManagement::test_cluster_management[2.33]
tests/system/vertex_ray/test_job_submission_dashboard.py::TestJobSubmissionDashboard::test_job_submission_dashboard[2.9]
[gw0] [ 91%] FAILED tests/system/vertex_ray/test_job_submission_dashboard.py::TestJobSubmissionDashboard::test_job_submission_dashboard[2.9]
tests/system/vertex_ray/test_job_submission_dashboard.py::TestJobSubmissionDashboard::test_job_submission_dashboard[2.33]
[gw0] [ 91%] FAILED tests/system/vertex_ray/test_job_submission_dashboard.py::TestJobSubmissionDashboard::test_job_submission_dashboard[2.33]
[gw15] [ 91%] PASSED tests/system/aiplatform/test_matching_engine_index.py::TestMatchingEngine::test_create_get_list_matching_engine_index
tests/system/aiplatform/test_matching_engine_index.py::TestMatchingEngine::test_matching_engine_stream_index
[gw5] [ 92%] PASSED tests/system/aiplatform/test_e2e_forecasting.py::TestEndToEndForecasting2::test_end_to_end_forecasting[SequenceToSequencePlusForecastingTrainingJob]
tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_create_endpoint
[gw4] [ 92%] PASSED tests/system/aiplatform/test_e2e_forecasting.py::TestEndToEndForecasting1::test_end_to_end_forecasting[AutoMLForecastingTrainingJob]
tests/system/aiplatform/test_model_interactions.py::TestModelInteractions::test_prediction
[gw4] [ 93%] PASSED tests/system/aiplatform/test_model_interactions.py::TestModelInteractions::test_prediction
tests/system/aiplatform/test_model_interactions.py::TestModelInteractions::test_endpoint_predict_async
[gw4] [ 93%] PASSED tests/system/aiplatform/test_model_interactions.py::TestModelInteractions::test_endpoint_predict_async
[gw7] [ 93%] PASSED tests/system/aiplatform/test_e2e_forecasting.py::TestEndToEndForecasting4::test_end_to_end_forecasting[TimeSeriesDenseEncoderForecastingTrainingJob]
tests/system/aiplatform/test_model_version_management.py::TestVersionManagement::test_upload_deploy_manage_versioned_model
[gw7] [ 94%] PASSED tests/system/aiplatform/test_model_version_management.py::TestVersionManagement::test_upload_deploy_manage_versioned_model
[gw15] [ 94%] PASSED tests/system/aiplatform/test_matching_engine_index.py::TestMatchingEngine::test_matching_engine_stream_index
tests/system/aiplatform/test_pipeline_job_schedule.py::TestPipelineJobSchedule::test_create_get_pause_resume_update_list
[gw6] [ 95%] PASSED tests/system/aiplatform/test_e2e_forecasting.py::TestEndToEndForecasting3::test_end_to_end_forecasting[TemporalFusionTransformerForecastingTrainingJob]
tests/system/aiplatform/test_model_upload.py::TestModelUploadAndUpdate::test_upload_and_deploy_xgboost_model
[gw9] [ 95%] PASSED tests/system/aiplatform/test_e2e_tabular.py::TestEndToEndTabular::test_end_to_end_tabular
tests/system/aiplatform/test_persistent_resource.py::TestPersistentResource::test_create_persistent_resource
[gw9] [ 95%] FAILED tests/system/aiplatform/test_persistent_resource.py::TestPersistentResource::test_create_persistent_resource
[gw5] [ 96%] PASSED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_create_endpoint
tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_two_models_one_valid_config
[gw5] [ 96%] FAILED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_two_models_one_valid_config
tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_pause_and_update_config
[gw5] [ 97%] SKIPPED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_pause_and_update_config
tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_two_models_two_valid_configs
[gw5] [ 97%] FAILED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_two_models_two_valid_configs
tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_invalid_config_incorrect_model_id
[gw5] [ 97%] PASSED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_invalid_config_incorrect_model_id
tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_invalid_config_xai
[gw5] [ 98%] PASSED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_invalid_config_xai
tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_two_models_invalid_configs_xai
[gw5] [ 98%] PASSED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_two_models_invalid_configs_xai
tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_notification_channel_alert_config
[gw5] [ 99%] FAILED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_notification_channel_alert_config
[gw15] [ 99%] PASSED tests/system/aiplatform/test_pipeline_job_schedule.py::TestPipelineJobSchedule::test_create_get_pause_resume_update_list
[gw6] [100%] PASSED tests/system/aiplatform/test_model_upload.py::TestModelUploadAndUpdate::test_upload_and_deploy_xgboost_model
=================================== FAILURES ===================================
___________ TestExperimentModel.test_xgboost_booster_with_custom_uri ___________
[gw10] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
args = (parent: "projects/ucaip-sample-tests/locations/us-central1/metadataStores/default"
artifact {
uri: "gs://test-verte... key: "frameworkName"
value {
string_value: "xgboost"
}
}
}
}
artifact_id: "xgb-booster"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1/metadataStores/defau...pc/1.51.3 gax/2.21.0 gapic/1.82.0+top_google_constructor_method+google.cloud.aiplatform.metadata._models.save_model')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:247: in __call__
response, ignored_call = self._with_call(request,
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:290: in _with_call
return call.result(), call
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:343: in result
raise self
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:274: in continuation
response, call = self._thunk(new_method).with_call(
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:957: in with_call
return _end_unary_response_blocking(state, call, True, None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state = <...>
call = <...>
with_call = True, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.ALREADY_EXISTS
E details = "Artifact with name projects/580378083368/locations/us-central1/metadataStores/default/artifacts/xgb-booster already exists"
E debug_error_string = "UNKNOWN:Error received from peer ipv4:142.251.188.95:443 {created_time:"2025-02-27T23:35:46.393471566+00:00", grpc_status:6, grpc_message:"Artifact with name projects/580378083368/locations/us-central1/metadataStores/default/artifacts/xgb-booster already exists"}"
E >
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
self = <...>
shared_state = {'bucket': <...>, 'resources': [<...>]}
def test_xgboost_booster_with_custom_uri(self, shared_state):
aiplatform.init(
project=e2e_base._PROJECT,
location=e2e_base._LOCATION,
staging_bucket=f"gs://{shared_state['staging_bucket_name']}",
)
train_x = np.array([[1, 1], [1, 2], [2, 2], [2, 3]])
train_y = np.array([1, 1, 0, 0])
dtrain = xgb.DMatrix(data=train_x, label=train_y)
booster = xgb.train(
params={"num_parallel_tree": 4, "subsample": 0.5, "num_class": 2},
dtrain=dtrain,
)
# Test save xgboost booster model with custom uri
uri = f"gs://{shared_state['staging_bucket_name']}/custom-uri"
> aiplatform.save_model(
model=booster,
artifact_id="xgb-booster",
uri=uri,
)
tests/system/aiplatform/test_experiment_model.py:112:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/metadata/_models.py:530: in save_model
model_artifact.create(
google/cloud/aiplatform/metadata/schema/base_artifact.py:186: in create
new_artifact_instance = artifact.Artifact.create(
google/cloud/aiplatform/metadata/artifact.py:354: in create
return cls._create(
google/cloud/aiplatform/metadata/artifact.py:204: in _create
resource = cls._create_resource(
google/cloud/aiplatform/metadata/artifact.py:113: in _create_resource
return client.create_artifact(
google/cloud/aiplatform_v1/services/metadata_service/client.py:1504: in create_artifact
response = rpc(
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131: in __call__
return wrapped_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "projects/ucaip-sample-tests/locations/us-central1/metadataStores/default"
artifact {
uri: "gs://test-verte... key: "frameworkName"
value {
string_value: "xgboost"
}
}
}
}
artifact_id: "xgb-booster"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1/metadataStores/defau...pc/1.51.3 gax/2.21.0 gapic/1.82.0+top_google_constructor_method+google.cloud.aiplatform.metadata._models.save_model')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.AlreadyExists: 409 Artifact with name projects/580378083368/locations/us-central1/metadataStores/default/artifacts/xgb-booster already exists
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:78: AlreadyExists
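Note on this failure class: the TestExperimentModel tests save models under fixed artifact IDs ("xgb-booster" here) in the shared default metadata store of the ucaip-sample-tests project, so any artifact left behind by an earlier build makes the create RPC return 409 ALREADY_EXISTS before save_model can finish. A minimal workaround sketch, assuming the test's assertions do not depend on the exact ID (the uuid suffix is illustrative, not part of the actual test):

    import uuid

    from google.cloud import aiplatform

    # Suffix the fixed ID so reruns against the shared metadata store cannot
    # collide with artifacts left over from earlier builds (illustrative only).
    artifact_id = f"xgb-booster-{uuid.uuid4().hex[:8]}"
    aiplatform.save_model(
        model=booster,  # the xgb.Booster trained earlier in this test
        artifact_id=artifact_id,
        uri=uri,        # the custom gs:// URI built from the staging bucket
    )

Deleting the stale fixed-ID artifact in a setup/teardown fixture would achieve the same result while keeping the IDs stable; either way the root cause is leftover state, not the save_model call itself.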
_________ TestExperimentModel.test_xgboost_xgbmodel_with_custom_names __________
[gw10] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
args = (parent: "projects/ucaip-sample-tests/locations/us-central1/metadataStores/default"
artifact {
display_name: "custom...y: "frameworkName"
value {
string_value: "xgboost"
}
}
}
}
artifact_id: "xgboost-xgbmodel"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1/metadataStores/defau...pc/1.51.3 gax/2.21.0 gapic/1.82.0+top_google_constructor_method+google.cloud.aiplatform.metadata._models.save_model')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:247: in __call__
response, ignored_call = self._with_call(request,
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:290: in _with_call
return call.result(), call
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:343: in result
raise self
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:274: in continuation
response, call = self._thunk(new_method).with_call(
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:957: in with_call
return _end_unary_response_blocking(state, call, True, None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state = <...>
call = <...>
with_call = True, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.ALREADY_EXISTS
E details = "Artifact with name projects/580378083368/locations/us-central1/metadataStores/default/artifacts/xgboost-xgbmodel already exists"
E debug_error_string = "UNKNOWN:Error received from peer ipv4:142.251.188.95:443 {grpc_message:"Artifact with name projects/580378083368/locations/us-central1/metadataStores/default/artifacts/xgboost-xgbmodel already exists", grpc_status:6, created_time:"2025-02-27T23:35:48.237873303+00:00"}"
E >
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
self = <...>
shared_state = {'bucket': <...>, 'resources': [<...>]}
def test_xgboost_xgbmodel_with_custom_names(self, shared_state):
aiplatform.init(
project=e2e_base._PROJECT,
location=e2e_base._LOCATION,
staging_bucket=f"gs://{shared_state['staging_bucket_name']}",
)
train_x = np.array([[1, 1], [1, 2], [2, 2], [2, 3]])
train_y = np.array([1, 1, 0, 0])
xgb_model = xgb.XGBClassifier()
xgb_model.fit(train_x, train_y)
# Test save xgboost xgbmodel with custom display_name
> aiplatform.save_model(
model=xgb_model,
artifact_id="xgboost-xgbmodel",
display_name="custom-experiment-model-name",
)
tests/system/aiplatform/test_experiment_model.py:165:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/metadata/_models.py:530: in save_model
model_artifact.create(
google/cloud/aiplatform/metadata/schema/base_artifact.py:186: in create
new_artifact_instance = artifact.Artifact.create(
google/cloud/aiplatform/metadata/artifact.py:354: in create
return cls._create(
google/cloud/aiplatform/metadata/artifact.py:204: in _create
resource = cls._create_resource(
google/cloud/aiplatform/metadata/artifact.py:113: in _create_resource
return client.create_artifact(
google/cloud/aiplatform_v1/services/metadata_service/client.py:1504: in create_artifact
response = rpc(
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131: in __call__
return wrapped_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "projects/ucaip-sample-tests/locations/us-central1/metadataStores/default"
artifact {
display_name: "custom...y: "frameworkName"
value {
string_value: "xgboost"
}
}
}
}
artifact_id: "xgboost-xgbmodel"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1/metadataStores/defau...pc/1.51.3 gax/2.21.0 gapic/1.82.0+top_google_constructor_method+google.cloud.aiplatform.metadata._models.save_model')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.AlreadyExists: 409 Artifact with name projects/580378083368/locations/us-central1/metadataStores/default/artifacts/xgboost-xgbmodel already exists
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:78: AlreadyExists
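test_xgboost_xgbmodel_with_custom_names fails the same way: the fixed ID "xgboost-xgbmodel" already exists in the default metadata store, so the create RPC returns 409 before the custom display_name is ever exercised. The unique-suffix or delete-first workaround sketched above applies here unchanged.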
____________ TestAutologging.test_autologging_with_autorun_creation ____________
[gw0] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
self = Index(['experiment_name', 'run_name', 'run_type', 'state', 'param.copy_X',
'param.fit_intercept', 'param.positi...an_squared_error', 'metric.training_r2_score',
'metric.training_root_mean_squared_error'],
dtype='object')
key = 'metric.training_mae'
def get_loc(self, key):
"""
Get integer location, slice or boolean mask for requested label.
Parameters
----------
key : label
Returns
-------
int if unique index, slice if monotonic index, else mask
Examples
--------
>>> unique_index = pd.Index(list('abc'))
>>> unique_index.get_loc('b')
1
>>> monotonic_index = pd.Index(list('abbc'))
>>> monotonic_index.get_loc('b')
slice(1, 3, None)
>>> non_monotonic_index = pd.Index(list('abcb'))
>>> non_monotonic_index.get_loc('b')
array([False, True, False, True])
"""
casted_key = self._maybe_cast_indexer(key)
try:
> return self._engine.get_loc(casted_key)
.nox/system-3-10/lib/python3.10/site-packages/pandas/core/indexes/base.py:3805:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
index.pyx:167: in pandas._libs.index.IndexEngine.get_loc
???
index.pyx:196: in pandas._libs.index.IndexEngine.get_loc
???
pandas/_libs/hashtable_class_helper.pxi:7081: in pandas._libs.hashtable.PyObjectHashTable.get_item
???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> ???
E KeyError: 'metric.training_mae'
pandas/_libs/hashtable_class_helper.pxi:7089: KeyError
The above exception was the direct cause of the following exception:
self = <...>
shared_state = {'bucket': <...>, 'resources': [<...>]}
def test_autologging_with_autorun_creation(self, shared_state):
aiplatform.init(
project=e2e_base._PROJECT,
location=e2e_base._LOCATION,
experiment=self._experiment_autocreate_scikit,
experiment_tensorboard=self._backing_tensorboard,
)
shared_state["resources"] = [self._backing_tensorboard]
shared_state["resources"].append(
aiplatform.metadata.metadata._experiment_tracker.experiment
)
aiplatform.autolog()
build_and_train_test_scikit_model()
# Confirm sklearn run, params, and metrics exist
experiment_df_scikit = aiplatform.get_experiment_df()
assert experiment_df_scikit["run_name"][0].startswith("sklearn-")
assert experiment_df_scikit["param.fit_intercept"][0] == "True"
> assert experiment_df_scikit["metric.training_mae"][0] > 0
tests/system/aiplatform/test_autologging.py:162:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-10/lib/python3.10/site-packages/pandas/core/frame.py:4102: in __getitem__
indexer = self.columns.get_loc(key)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Index(['experiment_name', 'run_name', 'run_type', 'state', 'param.copy_X',
'param.fit_intercept', 'param.positi...an_squared_error', 'metric.training_r2_score',
'metric.training_root_mean_squared_error'],
dtype='object')
key = 'metric.training_mae'
def get_loc(self, key):
"""
Get integer location, slice or boolean mask for requested label.
Parameters
----------
key : label
Returns
-------
int if unique index, slice if monotonic index, else mask
Examples
--------
>>> unique_index = pd.Index(list('abc'))
>>> unique_index.get_loc('b')
1
>>> monotonic_index = pd.Index(list('abbc'))
>>> monotonic_index.get_loc('b')
slice(1, 3, None)
>>> non_monotonic_index = pd.Index(list('abcb'))
>>> non_monotonic_index.get_loc('b')
array([False, True, False, True])
"""
casted_key = self._maybe_cast_indexer(key)
try:
return self._engine.get_loc(casted_key)
except KeyError as err:
if isinstance(casted_key, slice) or (
isinstance(casted_key, abc.Iterable)
and any(isinstance(x, slice) for x in casted_key)
):
raise InvalidIndexError(key)
> raise KeyError(key) from err
E KeyError: 'metric.training_mae'
.nox/system-3-10/lib/python3.10/site-packages/pandas/core/indexes/base.py:3812: KeyError
------------------------------ Captured log setup ------------------------------
INFO google.cloud.aiplatform.tensorboard.tensorboard_resource:base.py:85 Creating Tensorboard
INFO google.cloud.aiplatform.tensorboard.tensorboard_resource:base.py:88 Create Tensorboard backing LRO: projects/580378083368/locations/us-central1/tensorboards/5394023725962100736/operations/1965830136519458816
INFO google.cloud.aiplatform.tensorboard.tensorboard_resource:base.py:113 Tensorboard created. Resource name: projects/580378083368/locations/us-central1/tensorboards/5394023725962100736
INFO google.cloud.aiplatform.tensorboard.tensorboard_resource:base.py:114 To use this Tensorboard in another session:
INFO google.cloud.aiplatform.tensorboard.tensorboard_resource:base.py:115 tb = aiplatform.Tensorboard('projects/580378083368/locations/us-central1/tensorboards/5394023725962100736')
----------------------------- Captured stdout call -----------------------------
------------------------------ Captured log call -------------------------------
INFO google.cloud.aiplatform.metadata.experiment_resources:experiment_resources.py:797 Associating projects/580378083368/locations/us-central1/metadataStores/default/contexts/tmpvrtxsdk-e2e--451794e1-4b8f-4f12-8d8a-960e94d5d7b1-sklearn-2025-02-27-23-35-41-86f72 to Experiment: tmpvrtxsdk-e2e--451794e1-4b8f-4f12-8d8a-960e94d5d7b1
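This autologging failure looks like a column-name mismatch rather than a missing metric: the captured Index shows long-form metric names (metric.training_r2_score, metric.training_root_mean_squared_error, ...), while the assertion looks up the abbreviated metric.training_mae, which is consistent with an autologger version that now writes training_mean_absolute_error. A defensive lookup sketch; the long-form column name is an assumption inferred from the Index repr above:

    # Accept either the abbreviated or the long-form MAE column, since the
    # autologger's metric naming appears to differ across versions (assumption).
    candidates = ("metric.training_mae", "metric.training_mean_absolute_error")
    mae_col = next((c for c in candidates if c in experiment_df_scikit.columns), None)
    assert mae_col is not None, f"no MAE column among {list(experiment_df_scikit.columns)}"
    assert experiment_df_scikit[mae_col][0] > 0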
______ TestExperimentModel.test_tensorflow_keras_model_with_input_example ______
[gw10] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
args = (parent: "projects/ucaip-sample-tests/locations/us-central1/metadataStores/default"
artifact {
uri: "gs://test-verte...key: "frameworkName"
value {
string_value: "tensorflow"
}
}
}
}
artifact_id: "keras-model"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1/metadataStores/defau...pc/1.51.3 gax/2.21.0 gapic/1.82.0+top_google_constructor_method+google.cloud.aiplatform.metadata._models.save_model')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:247: in __call__
response, ignored_call = self._with_call(request,
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:290: in _with_call
return call.result(), call
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:343: in result
raise self
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:274: in continuation
response, call = self._thunk(new_method).with_call(
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:957: in with_call
return _end_unary_response_blocking(state, call, True, None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state = <...>
call = <...>
with_call = True, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.ALREADY_EXISTS
E details = "Artifact with name projects/580378083368/locations/us-central1/metadataStores/default/artifacts/keras-model already exists"
E debug_error_string = "UNKNOWN:Error received from peer ipv4:142.251.188.95:443 {created_time:"2025-02-27T23:35:58.388488812+00:00", grpc_status:6, grpc_message:"Artifact with name projects/580378083368/locations/us-central1/metadataStores/default/artifacts/keras-model already exists"}"
E >
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
self = <...>
shared_state = {'bucket': <...>, 'resources': [<...>]}
def test_tensorflow_keras_model_with_input_example(self, shared_state):
aiplatform.init(
project=e2e_base._PROJECT,
location=e2e_base._LOCATION,
staging_bucket=f"gs://{shared_state['staging_bucket_name']}",
)
train_x = np.random.random((100, 2))
train_y = np.random.random((100, 1))
model = tf.keras.Sequential(
[tf.keras.layers.Dense(5, input_shape=(2,)), tf.keras.layers.Softmax()]
)
model.compile(optimizer="adam", loss="mean_squared_error")
model.fit(train_x, train_y)
# Test save tf.keras model with input example
> aiplatform.save_model(
model=model,
artifact_id="keras-model",
input_example=train_x,
)
tests/system/aiplatform/test_experiment_model.py:225:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/metadata/_models.py:530: in save_model
model_artifact.create(
google/cloud/aiplatform/metadata/schema/base_artifact.py:186: in create
new_artifact_instance = artifact.Artifact.create(
google/cloud/aiplatform/metadata/artifact.py:354: in create
return cls._create(
google/cloud/aiplatform/metadata/artifact.py:204: in _create
resource = cls._create_resource(
google/cloud/aiplatform/metadata/artifact.py:113: in _create_resource
return client.create_artifact(
google/cloud/aiplatform_v1/services/metadata_service/client.py:1504: in create_artifact
response = rpc(
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131: in __call__
return wrapped_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "projects/ucaip-sample-tests/locations/us-central1/metadataStores/default"
artifact {
uri: "gs://test-verte...key: "frameworkName"
value {
string_value: "tensorflow"
}
}
}
}
artifact_id: "keras-model"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1/metadataStores/defau...pc/1.51.3 gax/2.21.0 gapic/1.82.0+top_google_constructor_method+google.cloud.aiplatform.metadata._models.save_model')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.AlreadyExists: 409 Artifact with name projects/580378083368/locations/us-central1/metadataStores/default/artifacts/keras-model already exists
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:78: AlreadyExists
----------------------------- Captured stdout call -----------------------------
1/4 [======>.......................] - ETA: 2s - loss: 0.1364
4/4 [==============================] - 1s 3ms/step - loss: 0.1722
________ TestExperimentModel.test_tensorflow_module_with_gpu_container _________
[gw10] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
args = (parent: "projects/ucaip-sample-tests/locations/us-central1/metadataStores/default"
artifact {
uri: "gs://test-verte... key: "frameworkName"
value {
string_value: "tensorflow"
}
}
}
}
artifact_id: "tf-module"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1/metadataStores/defau...pc/1.51.3 gax/2.21.0 gapic/1.82.0+top_google_constructor_method+google.cloud.aiplatform.metadata._models.save_model')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:247: in __call__
response, ignored_call = self._with_call(request,
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:290: in _with_call
return call.result(), call
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:343: in result
raise self
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:274: in continuation
response, call = self._thunk(new_method).with_call(
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:957: in with_call
return _end_unary_response_blocking(state, call, True, None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state = <...>
call = <...>
with_call = True, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.ALREADY_EXISTS
E details = "Artifact with name projects/580378083368/locations/us-central1/metadataStores/default/artifacts/tf-module already exists"
E debug_error_string = "UNKNOWN:Error received from peer ipv4:108.177.98.95:443 {grpc_message:"Artifact with name projects/580378083368/locations/us-central1/metadataStores/default/artifacts/tf-module already exists", grpc_status:6, created_time:"2025-02-27T23:36:07.159474185+00:00"}"
E >
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
self = <...>
shared_state = {'bucket': <...>, 'resources': [<...>]}
def test_tensorflow_module_with_gpu_container(self, shared_state):
aiplatform.init(
project=e2e_base._PROJECT,
location=e2e_base._LOCATION,
staging_bucket=f"gs://{shared_state['staging_bucket_name']}",
)
class Adder(tf.Module):
@tf.function(
input_signature=[
tf.TensorSpec(
shape=[
2,
],
dtype=tf.float32,
)
]
)
def add(self, x):
return x + x
model = Adder()
# Test save tf.Module model
> aiplatform.save_model(model, "tf-module")
tests/system/aiplatform/test_experiment_model.py:293:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/metadata/_models.py:530: in save_model
model_artifact.create(
google/cloud/aiplatform/metadata/schema/base_artifact.py:186: in create
new_artifact_instance = artifact.Artifact.create(
google/cloud/aiplatform/metadata/artifact.py:354: in create
return cls._create(
google/cloud/aiplatform/metadata/artifact.py:204: in _create
resource = cls._create_resource(
google/cloud/aiplatform/metadata/artifact.py:113: in _create_resource
return client.create_artifact(
google/cloud/aiplatform_v1/services/metadata_service/client.py:1504: in create_artifact
response = rpc(
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131: in __call__
return wrapped_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "projects/ucaip-sample-tests/locations/us-central1/metadataStores/default"
artifact {
uri: "gs://test-verte... key: "frameworkName"
value {
string_value: "tensorflow"
}
}
}
}
artifact_id: "tf-module"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1/metadataStores/defau...pc/1.51.3 gax/2.21.0 gapic/1.82.0+top_google_constructor_method+google.cloud.aiplatform.metadata._models.save_model')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.AlreadyExists: 409 Artifact with name projects/580378083368/locations/us-central1/metadataStores/default/artifacts/tf-module already exists
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:78: AlreadyExists
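This is the fourth ALREADY_EXISTS failure in TestExperimentModel; all four fixed IDs (xgb-booster, xgboost-xgbmodel, keras-model, tf-module) hit stale artifacts rather than regressions in save_model, which suggests a previous build's cleanup did not run to completion in this shared project.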
___________ TestPredictionCpr.test_build_cpr_model_upload_and_deploy ___________
[gw13] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
self = <...>
shared_state = {}
caplog = <_pytest.logging.LogCaptureFixture object at 0x7f8e52c0aaa0>
def test_build_cpr_model_upload_and_deploy(self, shared_state, caplog):
"""Creates a CPR model from custom predictor, uploads it and deploys."""
caplog.set_level(logging.INFO)
aiplatform.init(project=e2e_base._PROJECT, location=e2e_base._LOCATION)
local_model = LocalModel.build_cpr_model(
_USER_CODE_DIR,
_IMAGE_URI,
predictor=SklearnPredictor,
requirements_path=os.path.join(_USER_CODE_DIR, _REQUIREMENTS_FILE),
)
with local_model.deploy_to_local_endpoint(
artifact_uri=_LOCAL_MODEL_DIR,
) as local_endpoint:
local_predict_response = local_endpoint.predict(
request=f'{{"instances": {_PREDICTION_INPUT}}}',
headers={"Content-Type": "application/json"},
)
assert len(json.loads(local_predict_response.content)["predictions"]) == 1
interactive_local_endpoint = local_model.deploy_to_local_endpoint(
artifact_uri=_LOCAL_MODEL_DIR,
)
interactive_local_endpoint.serve()
interactive_local_predict_response = interactive_local_endpoint.predict(
request=f'{{"instances": {_PREDICTION_INPUT}}}',
headers={"Content-Type": "application/json"},
)
interactive_local_endpoint.stop()
assert (
len(json.loads(interactive_local_predict_response.content)["predictions"])
== 1
)
# Configure docker.
logging.info(
subprocess.run(["gcloud", "auth", "configure-docker"], capture_output=True)
)
> local_model.push_image()
tests/system/aiplatform/test_prediction_cpr.py:94:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/prediction/local_model.py:612: in push_image
errors.raise_docker_error_with_command(command, return_code)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
command = ['docker', 'push', 'gcr.io/ucaip-sample-tests/prediction-cpr/sklearn:20250227_233526']
return_code = 1
def raise_docker_error_with_command(command: List[str], return_code: int) -> NoReturn:
"""Raises DockerError with the given command and return code.
Args:
command (List(str)):
Required. The docker command that fails.
return_code (int):
Required. The return code from the command.
Raises:
DockerError which error message populated by the given command and return code.
"""
error_msg = textwrap.dedent(
"""
Docker failed with error code {code}.
Command: {cmd}
""".format(
code=return_code, cmd=" ".join(command)
)
)
> raise DockerError(error_msg, command, return_code)
E google.cloud.aiplatform.docker_utils.errors.DockerError: ('\nDocker failed with error code 1.\nCommand: docker push gcr.io/ucaip-sample-tests/prediction-cpr/sklearn:20250227_233526\n', ['docker', 'push', 'gcr.io/ucaip-sample-tests/prediction-cpr/sklearn:20250227_233526'], 1)
google/cloud/aiplatform/docker_utils/errors.py:60: DockerError
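This CPR failure is different in kind: the image builds and both local endpoints serve predictions, and only the final docker push to gcr.io/ucaip-sample-tests exits 1 (see the captured build log below), which typically points at registry credentials or push permissions. The test logs the result of gcloud auth configure-docker but never checks it, so a credential-helper failure only surfaces at push time. A small hardening sketch, reusing the subprocess and local_model names from the test above:

    import subprocess

    # Fail fast if docker credential setup did not succeed, instead of
    # discovering it later when `docker push` exits with code 1.
    result = subprocess.run(
        ["gcloud", "auth", "configure-docker", "--quiet"],
        capture_output=True,
        text=True,
    )
    assert result.returncode == 0, result.stderr
    local_model.push_image()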
------------------------------ Captured log call -------------------------------
INFO google.cloud.aiplatform.docker_utils.build:build.py:531 Running command: docker build -t gcr.io/ucaip-sample-tests/prediction-cpr/sklearn:20250227_233526 --rm -f- /tmpfs/src/github/python-aiplatform/tests/system/aiplatform/test_resources/cpr_user_code
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Install the buildx component to build images with BuildKit:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 https://docs.docker.com/go/buildx/
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Sending build context to Docker daemon 11.31kB
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 1/14 : FROM python:3.10
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 3.10: Pulling from library/python
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 155ad54a8b28: Pulling fs layer
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 8031108f3cda: Pulling fs layer
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 1d281e50d3e4: Pulling fs layer
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 447713e77b4f: Pulling fs layer
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 a6c2fd51c72c: Pulling fs layer
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 2268f82e627e: Pulling fs layer
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 7fda9d093afe: Pulling fs layer
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 447713e77b4f: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 a6c2fd51c72c: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 2268f82e627e: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 7fda9d093afe: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 8031108f3cda: Verifying Checksum
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 8031108f3cda: Download complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 155ad54a8b28: Verifying Checksum
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 155ad54a8b28: Download complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 1d281e50d3e4: Download complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 a6c2fd51c72c: Verifying Checksum
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 a6c2fd51c72c: Download complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 2268f82e627e: Verifying Checksum
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 2268f82e627e: Download complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 7fda9d093afe: Verifying Checksum
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 7fda9d093afe: Download complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 447713e77b4f: Verifying Checksum
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 447713e77b4f: Download complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 155ad54a8b28: Pull complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 8031108f3cda: Pull complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 1d281e50d3e4: Pull complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 447713e77b4f: Pull complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 a6c2fd51c72c: Pull complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 2268f82e627e: Pull complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 7fda9d093afe: Pull complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Digest: sha256:e70cd7b54564482c0dee8cd6d8e314450aac59ea0ff669ffa715207ea0e04fa6
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Status: Downloaded newer image for python:3.10
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> e83a01774710
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 2/14 : ENV PYTHONDONTWRITEBYTECODE=1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in bd432ce2d4e9
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container bd432ce2d4e9
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 24464a65023e
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 3/14 : EXPOSE 8080
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in 5e0d47c9c68a
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container 5e0d47c9c68a
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> eb56ee3a6db2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 4/14 : ENTRYPOINT ["python", "-m", "google.cloud.aiplatform.prediction.model_server"]
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in d63e8d275002
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container d63e8d275002
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 1f6b24bd94de
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 5/14 : RUN mkdir -m 777 -p /usr/app /home
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in d00ca8d6782b
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container d00ca8d6782b
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> c6114d64d134
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 6/14 : WORKDIR /usr/app
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in ff8e9f247359
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container ff8e9f247359
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 977a68b61c9f
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 7/14 : ENV HOME=/home
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in f76020a49a58
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container f76020a49a58
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 7dfdffe07574
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 8/14 : RUN pip install --no-cache-dir --force-reinstall 'google-cloud-aiplatform[prediction]>=1.27.0'
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in 0949a860b436
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-aiplatform[prediction]>=1.27.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_aiplatform-1.82.0-py2.py3-none-any.whl (7.3 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.3/7.3 MB 29.1 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-auth<3.0.0dev,>=2.14.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_auth-2.38.0-py2.py3-none-any.whl (210 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 210.8/210.8 kB 220.0 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting shapely<3.0.0dev
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading shapely-2.0.7-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.5 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.5/2.5 MB 77.2 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting docstring-parser<1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading docstring_parser-0.16-py3-none-any.whl (36 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pydantic<3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading pydantic-2.10.6-py3-none-any.whl (431 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 431.7/431.7 kB 127.7 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-bigquery!=3.20.0,<4.0.0dev,>=1.15.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_bigquery-3.30.0-py2.py3-none-any.whl (247 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 247.9/247.9 kB 239.3 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting proto-plus<2.0.0dev,>=1.22.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading proto_plus-1.26.0-py3-none-any.whl (50 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 50.2/50.2 kB 187.4 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting typing-extensions
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading typing_extensions-4.12.2-py3-none-any.whl (37 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting packaging>=14.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading packaging-24.2-py3-none-any.whl (65 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 65.5/65.5 kB 220.7 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-resource-manager<3.0.0dev,>=1.3.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_resource_manager-1.14.1-py2.py3-none-any.whl (392 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 392.3/392.3 kB 187.8 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3.0.0dev,>=1.34.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_api_core-2.24.1-py3-none-any.whl (160 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 160.1/160.1 kB 242.8 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting protobuf!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5,<6.0.0dev,>=3.20.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading protobuf-5.29.3-cp38-abi3-manylinux2014_x86_64.whl (319 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 319.7/319.7 kB 237.2 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-storage<3.0.0dev,>=1.32.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_storage-2.19.0-py2.py3-none-any.whl (131 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 131.8/131.8 kB 242.5 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting uvicorn[standard]>=0.16.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading uvicorn-0.34.0-py3-none-any.whl (62 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.3/62.3 kB 192.8 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting starlette>=0.17.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading starlette-0.46.0-py3-none-any.whl (71 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 72.0/72.0 kB 207.3 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting httpx<0.25.0,>=0.23.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading httpx-0.24.1-py3-none-any.whl (75 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 75.4/75.4 kB 91.3 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting fastapi<=0.114.0,>=0.71.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading fastapi-0.114.0-py3-none-any.whl (94 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 94.0/94.0 kB 223.9 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting docker>=5.0.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading docker-7.1.0-py3-none-any.whl (147 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 147.8/147.8 kB 240.2 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting requests>=2.26.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading requests-2.32.3-py3-none-any.whl (64 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 64.9/64.9 kB 221.7 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting urllib3>=1.26.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading urllib3-2.3.0-py3-none-any.whl (128 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 128.4/128.4 kB 234.9 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting starlette>=0.17.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading starlette-0.38.6-py3-none-any.whl (71 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 71.5/71.5 kB 223.5 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting googleapis-common-protos<2.0.dev0,>=1.56.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading googleapis_common_protos-1.68.0-py2.py3-none-any.whl (164 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 165.0/165.0 kB 245.2 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting grpcio-status<2.0.dev0,>=1.33.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading grpcio_status-1.70.0-py3-none-any.whl (14 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting grpcio<2.0dev,>=1.33.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading grpcio-1.70.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (5.9 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.9/5.9 MB 107.8 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting cachetools<6.0,>=2.0.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading cachetools-5.5.2-py3-none-any.whl (10 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pyasn1-modules>=0.2.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading pyasn1_modules-0.4.1-py3-none-any.whl (181 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 181.5/181.5 kB 243.4 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting rsa<5,>=3.1.4
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading rsa-4.9-py3-none-any.whl (34 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting python-dateutil<3.0dev,>=2.7.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 229.9/229.9 kB 248.7 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-core<3.0.0dev,>=2.4.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_core-2.4.2-py2.py3-none-any.whl (29 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-resumable-media<3.0dev,>=2.0.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_resumable_media-2.7.2-py2.py3-none-any.whl (81 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 81.3/81.3 kB 54.4 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting grpc-google-iam-v1<1.0.0dev,>=0.14.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading grpc_google_iam_v1-0.14.0-py2.py3-none-any.whl (27 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-crc32c<2.0dev,>=1.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_crc32c-1.6.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (37 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting httpcore<0.18.0,>=0.15.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading httpcore-0.17.3-py3-none-any.whl (74 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 74.5/74.5 kB 215.5 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting idna
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading idna-3.10-py3-none-any.whl (70 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 70.4/70.4 kB 224.0 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting sniffio
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading sniffio-1.3.1-py3-none-any.whl (10 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting certifi
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading certifi-2025.1.31-py3-none-any.whl (166 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 166.4/166.4 kB 242.9 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pydantic-core==2.27.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading pydantic_core-2.27.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.0 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.0/2.0 MB 150.6 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting annotated-types>=0.6.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading annotated_types-0.7.0-py3-none-any.whl (13 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting numpy<3,>=1.14
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading numpy-2.2.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.4 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 16.4/16.4 MB 222.6 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting anyio<5,>=3.4.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading anyio-4.8.0-py3-none-any.whl (96 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 96.0/96.0 kB 153.7 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting click>=7.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading click-8.1.8-py3-none-any.whl (98 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 98.2/98.2 kB 232.9 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting h11>=0.8
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading h11-0.14.0-py3-none-any.whl (58 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 58.3/58.3 kB 208.5 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting websockets>=10.4
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading websockets-15.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (180 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 180.9/180.9 kB 239.1 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting uvloop!=0.15.0,!=0.15.1,>=0.14.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading uvloop-0.21.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.8 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.8/3.8 MB 234.1 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pyyaml>=5.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading PyYAML-6.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (751 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 751.2/751.2 kB 252.3 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting python-dotenv>=0.13
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading python_dotenv-1.0.1-py3-none-any.whl (19 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting watchfiles>=0.13
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading watchfiles-1.0.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (452 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 452.9/452.9 kB 261.4 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting httptools>=0.6.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading httptools-0.6.4-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (442 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 442.1/442.1 kB 249.6 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting exceptiongroup>=1.0.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading exceptiongroup-1.2.2-py3-none-any.whl (16 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pyasn1<0.7.0,>=0.4.6
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading pyasn1-0.6.1-py3-none-any.whl (83 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 83.1/83.1 kB 214.6 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting six>=1.5
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading six-1.17.0-py2.py3-none-any.whl (11 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting charset-normalizer<4,>=2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading charset_normalizer-3.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (146 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 146.1/146.1 kB 239.1 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Installing collected packages: websockets, uvloop, urllib3, typing-extensions, sniffio, six, pyyaml, python-dotenv, pyasn1, protobuf, packaging, numpy, idna, httptools, h11, grpcio, google-crc32c, exceptiongroup, docstring-parser, click, charset-normalizer, certifi, cachetools, annotated-types, uvicorn, shapely, rsa, requests, python-dateutil, pydantic-core, pyasn1-modules, proto-plus, googleapis-common-protos, google-resumable-media, anyio, watchfiles, starlette, pydantic, httpcore, grpcio-status, google-auth, docker, httpx, grpc-google-iam-v1, google-api-core, fastapi, google-cloud-core, google-cloud-storage, google-cloud-resource-manager, google-cloud-bigquery, google-cloud-aiplatform
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully installed annotated-types-0.7.0 anyio-4.8.0 cachetools-5.5.2 certifi-2025.1.31 charset-normalizer-3.4.1 click-8.1.8 docker-7.1.0 docstring-parser-0.16 exceptiongroup-1.2.2 fastapi-0.114.0 google-api-core-2.24.1 google-auth-2.38.0 google-cloud-aiplatform-1.82.0 google-cloud-bigquery-3.30.0 google-cloud-core-2.4.2 google-cloud-resource-manager-1.14.1 google-cloud-storage-2.19.0 google-crc32c-1.6.0 google-resumable-media-2.7.2 googleapis-common-protos-1.68.0 grpc-google-iam-v1-0.14.0 grpcio-1.70.0 grpcio-status-1.70.0 h11-0.14.0 httpcore-0.17.3 httptools-0.6.4 httpx-0.24.1 idna-3.10 numpy-2.2.3 packaging-24.2 proto-plus-1.26.0 protobuf-5.29.3 pyasn1-0.6.1 pyasn1-modules-0.4.1 pydantic-2.10.6 pydantic-core-2.27.2 python-dateutil-2.9.0.post0 python-dotenv-1.0.1 pyyaml-6.0.2 requests-2.32.3 rsa-4.9 shapely-2.0.7 six-1.17.0 sniffio-1.3.1 starlette-0.38.6 typing-extensions-4.12.2 urllib3-2.3.0 uvicorn-0.34.0 uvloop-0.21.0 watchfiles-1.0.4 websockets-15.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 [notice] A new release of pip is available: 23.0.1 -> 25.0.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 [notice] To update, run: pip install --upgrade pip
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container 0949a860b436
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 7d2a228f53e7
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 9/14 : ENV HANDLER_MODULE=google.cloud.aiplatform.prediction.handler
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in f59da19f59ab
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container f59da19f59ab
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 61b779bda9df
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 10/14 : ENV HANDLER_CLASS=PredictionHandler
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in b2c8ae991789
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container b2c8ae991789
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> a7466780df15
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 11/14 : ENV PREDICTOR_MODULE=predictor
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in 638ac5d06bd8
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container 638ac5d06bd8
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 48d01d5207b8
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 12/14 : ENV PREDICTOR_CLASS=SklearnPredictor
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in c603bc918ba6
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container c603bc918ba6
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> f83a34e49d02
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 13/14 : COPY [".", "."]
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 6ecfd1b36525
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 14/14 : RUN pip install --no-cache-dir --force-reinstall -r requirements.txt
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in 5c16d7912b65
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting scikit-learn
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading scikit_learn-1.6.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (13.5 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 13.5/13.5 MB 48.9 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-aiplatform[prediction]
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_aiplatform-1.82.0-py2.py3-none-any.whl (7.3 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.3/7.3 MB 104.0 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting scipy>=1.6.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading scipy-1.15.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (37.6 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 37.6/37.6 MB 224.4 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting threadpoolctl>=3.1.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading threadpoolctl-3.5.0-py3-none-any.whl (18 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting joblib>=1.2.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading joblib-1.4.2-py3-none-any.whl (301 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 301.8/301.8 kB 250.8 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting numpy>=1.19.5
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading numpy-2.2.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.4 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 16.4/16.4 MB 183.6 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-auth<3.0.0dev,>=2.14.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_auth-2.38.0-py2.py3-none-any.whl (210 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 210.8/210.8 kB 247.2 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-bigquery!=3.20.0,<4.0.0dev,>=1.15.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_bigquery-3.30.0-py2.py3-none-any.whl (247 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 247.9/247.9 kB 239.2 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting typing-extensions
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading typing_extensions-4.12.2-py3-none-any.whl (37 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pydantic<3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading pydantic-2.10.6-py3-none-any.whl (431 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 431.7/431.7 kB 250.8 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting shapely<3.0.0dev
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading shapely-2.0.7-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.5 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.5/2.5 MB 206.3 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting proto-plus<2.0.0dev,>=1.22.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading proto_plus-1.26.0-py3-none-any.whl (50 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 50.2/50.2 kB 193.0 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-resource-manager<3.0.0dev,>=1.3.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_resource_manager-1.14.1-py2.py3-none-any.whl (392 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 392.3/392.3 kB 232.3 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting docstring-parser<1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading docstring_parser-0.16-py3-none-any.whl (36 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-storage<3.0.0dev,>=1.32.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_storage-2.19.0-py2.py3-none-any.whl (131 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 131.8/131.8 kB 233.5 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting protobuf!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5,<6.0.0dev,>=3.20.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading protobuf-5.29.3-cp38-abi3-manylinux2014_x86_64.whl (319 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 319.7/319.7 kB 252.1 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting packaging>=14.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading packaging-24.2-py3-none-any.whl (65 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 65.5/65.5 kB 217.9 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3.0.0dev,>=1.34.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_api_core-2.24.1-py3-none-any.whl (160 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 160.1/160.1 kB 228.0 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting docker>=5.0.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading docker-7.1.0-py3-none-any.whl (147 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 147.8/147.8 kB 244.5 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting fastapi<=0.114.0,>=0.71.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading fastapi-0.114.0-py3-none-any.whl (94 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 94.0/94.0 kB 211.1 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting httpx<0.25.0,>=0.23.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading httpx-0.24.1-py3-none-any.whl (75 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 75.4/75.4 kB 209.7 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting starlette>=0.17.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading starlette-0.46.0-py3-none-any.whl (71 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 72.0/72.0 kB 217.4 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting uvicorn[standard]>=0.16.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading uvicorn-0.34.0-py3-none-any.whl (62 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.3/62.3 kB 207.5 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting urllib3>=1.26.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading urllib3-2.3.0-py3-none-any.whl (128 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 128.4/128.4 kB 210.1 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting requests>=2.26.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading requests-2.32.3-py3-none-any.whl (64 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 64.9/64.9 kB 213.6 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting starlette>=0.17.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading starlette-0.38.6-py3-none-any.whl (71 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 71.5/71.5 kB 208.3 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting googleapis-common-protos<2.0.dev0,>=1.56.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading googleapis_common_protos-1.68.0-py2.py3-none-any.whl (164 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 165.0/165.0 kB 242.3 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting grpcio<2.0dev,>=1.33.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading grpcio-1.70.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (5.9 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.9/5.9 MB 242.4 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting grpcio-status<2.0.dev0,>=1.33.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading grpcio_status-1.70.0-py3-none-any.whl (14 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting cachetools<6.0,>=2.0.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading cachetools-5.5.2-py3-none-any.whl (10 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting rsa<5,>=3.1.4
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading rsa-4.9-py3-none-any.whl (34 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pyasn1-modules>=0.2.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading pyasn1_modules-0.4.1-py3-none-any.whl (181 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 181.5/181.5 kB 187.1 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-core<3.0.0dev,>=2.4.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_core-2.4.2-py2.py3-none-any.whl (29 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-resumable-media<3.0dev,>=2.0.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_resumable_media-2.7.2-py2.py3-none-any.whl (81 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 81.3/81.3 kB 226.2 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting python-dateutil<3.0dev,>=2.7.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 229.9/229.9 kB 247.1 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting grpc-google-iam-v1<1.0.0dev,>=0.14.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading grpc_google_iam_v1-0.14.0-py2.py3-none-any.whl (27 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-crc32c<2.0dev,>=1.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_crc32c-1.6.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (37 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting httpcore<0.18.0,>=0.15.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading httpcore-0.17.3-py3-none-any.whl (74 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 74.5/74.5 kB 206.0 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting certifi
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading certifi-2025.1.31-py3-none-any.whl (166 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 166.4/166.4 kB 248.8 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting idna
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading idna-3.10-py3-none-any.whl (70 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 70.4/70.4 kB 201.9 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting sniffio
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading sniffio-1.3.1-py3-none-any.whl (10 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting annotated-types>=0.6.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading annotated_types-0.7.0-py3-none-any.whl (13 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pydantic-core==2.27.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading pydantic_core-2.27.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.0 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.0/2.0 MB 252.0 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting anyio<5,>=3.4.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading anyio-4.8.0-py3-none-any.whl (96 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 96.0/96.0 kB 229.7 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting h11>=0.8
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading h11-0.14.0-py3-none-any.whl (58 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 58.3/58.3 kB 203.4 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting click>=7.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading click-8.1.8-py3-none-any.whl (98 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 98.2/98.2 kB 238.6 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting python-dotenv>=0.13
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading python_dotenv-1.0.1-py3-none-any.whl (19 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting uvloop!=0.15.0,!=0.15.1,>=0.14.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading uvloop-0.21.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.8 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.8/3.8 MB 185.7 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting httptools>=0.6.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading httptools-0.6.4-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (442 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 442.1/442.1 kB 240.0 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting websockets>=10.4
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading websockets-15.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (180 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 180.9/180.9 kB 245.6 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting watchfiles>=0.13
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading watchfiles-1.0.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (452 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 452.9/452.9 kB 249.9 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pyyaml>=5.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading PyYAML-6.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (751 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 751.2/751.2 kB 172.4 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting exceptiongroup>=1.0.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading exceptiongroup-1.2.2-py3-none-any.whl (16 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pyasn1<0.7.0,>=0.4.6
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading pyasn1-0.6.1-py3-none-any.whl (83 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 83.1/83.1 kB 204.8 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting six>=1.5
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading six-1.17.0-py2.py3-none-any.whl (11 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting charset-normalizer<4,>=2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading charset_normalizer-3.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (146 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 146.1/146.1 kB 232.9 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Installing collected packages: websockets, uvloop, urllib3, typing-extensions, threadpoolctl, sniffio, six, pyyaml, python-dotenv, pyasn1, protobuf, packaging, numpy, joblib, idna, httptools, h11, grpcio, google-crc32c, exceptiongroup, docstring-parser, click, charset-normalizer, certifi, cachetools, annotated-types, uvicorn, shapely, scipy, rsa, requests, python-dateutil, pydantic-core, pyasn1-modules, proto-plus, googleapis-common-protos, google-resumable-media, anyio, watchfiles, starlette, scikit-learn, pydantic, httpcore, grpcio-status, google-auth, docker, httpx, grpc-google-iam-v1, google-api-core, fastapi, google-cloud-core, google-cloud-storage, google-cloud-resource-manager, google-cloud-bigquery, google-cloud-aiplatform
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: websockets
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: websockets 15.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling websockets-15.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled websockets-15.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: uvloop
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: uvloop 0.21.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling uvloop-0.21.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled uvloop-0.21.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: urllib3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: urllib3 2.3.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling urllib3-2.3.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled urllib3-2.3.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: typing-extensions
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: typing_extensions 4.12.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling typing_extensions-4.12.2:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled typing_extensions-4.12.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: sniffio
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: sniffio 1.3.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling sniffio-1.3.1:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled sniffio-1.3.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: six
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: six 1.17.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling six-1.17.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled six-1.17.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: pyyaml
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: PyYAML 6.0.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling PyYAML-6.0.2:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled PyYAML-6.0.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: python-dotenv
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: python-dotenv 1.0.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling python-dotenv-1.0.1:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled python-dotenv-1.0.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: pyasn1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: pyasn1 0.6.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling pyasn1-0.6.1:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled pyasn1-0.6.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: protobuf
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: protobuf 5.29.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling protobuf-5.29.3:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled protobuf-5.29.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: packaging
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: packaging 24.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling packaging-24.2:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled packaging-24.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: numpy
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: numpy 2.2.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling numpy-2.2.3:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled numpy-2.2.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: idna
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: idna 3.10
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling idna-3.10:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled idna-3.10
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: httptools
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: httptools 0.6.4
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling httptools-0.6.4:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled httptools-0.6.4
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: h11
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: h11 0.14.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling h11-0.14.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled h11-0.14.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: grpcio
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: grpcio 1.70.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling grpcio-1.70.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled grpcio-1.70.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-crc32c
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-crc32c 1.6.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-crc32c-1.6.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-crc32c-1.6.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: exceptiongroup
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: exceptiongroup 1.2.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling exceptiongroup-1.2.2:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled exceptiongroup-1.2.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: docstring-parser
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: docstring_parser 0.16
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling docstring_parser-0.16:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled docstring_parser-0.16
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: click
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: click 8.1.8
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling click-8.1.8:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled click-8.1.8
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: charset-normalizer
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: charset-normalizer 3.4.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling charset-normalizer-3.4.1:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled charset-normalizer-3.4.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: certifi
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: certifi 2025.1.31
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling certifi-2025.1.31:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled certifi-2025.1.31
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: cachetools
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: cachetools 5.5.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling cachetools-5.5.2:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled cachetools-5.5.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: annotated-types
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: annotated-types 0.7.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling annotated-types-0.7.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled annotated-types-0.7.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: uvicorn
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: uvicorn 0.34.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling uvicorn-0.34.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled uvicorn-0.34.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: shapely
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: shapely 2.0.7
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling shapely-2.0.7:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled shapely-2.0.7
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: rsa
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: rsa 4.9
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling rsa-4.9:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled rsa-4.9
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: requests
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: requests 2.32.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling requests-2.32.3:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled requests-2.32.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: python-dateutil
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: python-dateutil 2.9.0.post0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling python-dateutil-2.9.0.post0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled python-dateutil-2.9.0.post0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: pydantic-core
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: pydantic_core 2.27.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling pydantic_core-2.27.2:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled pydantic_core-2.27.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: pyasn1-modules
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: pyasn1_modules 0.4.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling pyasn1_modules-0.4.1:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled pyasn1_modules-0.4.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: proto-plus
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: proto-plus 1.26.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling proto-plus-1.26.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled proto-plus-1.26.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: googleapis-common-protos
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: googleapis-common-protos 1.68.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling googleapis-common-protos-1.68.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled googleapis-common-protos-1.68.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-resumable-media
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-resumable-media 2.7.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-resumable-media-2.7.2:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-resumable-media-2.7.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: anyio
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: anyio 4.8.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling anyio-4.8.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled anyio-4.8.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: watchfiles
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: watchfiles 1.0.4
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling watchfiles-1.0.4:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled watchfiles-1.0.4
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: starlette
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: starlette 0.38.6
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling starlette-0.38.6:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled starlette-0.38.6
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: pydantic
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: pydantic 2.10.6
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling pydantic-2.10.6:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled pydantic-2.10.6
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: httpcore
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: httpcore 0.17.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling httpcore-0.17.3:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled httpcore-0.17.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: grpcio-status
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: grpcio-status 1.70.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling grpcio-status-1.70.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled grpcio-status-1.70.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-auth
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-auth 2.38.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-auth-2.38.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-auth-2.38.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: docker
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: docker 7.1.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling docker-7.1.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled docker-7.1.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: httpx
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: httpx 0.24.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling httpx-0.24.1:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled httpx-0.24.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: grpc-google-iam-v1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: grpc-google-iam-v1 0.14.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling grpc-google-iam-v1-0.14.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled grpc-google-iam-v1-0.14.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-api-core
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-api-core 2.24.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-api-core-2.24.1:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-api-core-2.24.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: fastapi
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: fastapi 0.114.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling fastapi-0.114.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled fastapi-0.114.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-cloud-core
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-cloud-core 2.4.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-cloud-core-2.4.2:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-cloud-core-2.4.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-cloud-storage
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-cloud-storage 2.19.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-cloud-storage-2.19.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-cloud-storage-2.19.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-cloud-resource-manager
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-cloud-resource-manager 1.14.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-cloud-resource-manager-1.14.1:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-cloud-resource-manager-1.14.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-cloud-bigquery
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-cloud-bigquery 3.30.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-cloud-bigquery-3.30.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-cloud-bigquery-3.30.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-cloud-aiplatform
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-cloud-aiplatform 1.82.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-cloud-aiplatform-1.82.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-cloud-aiplatform-1.82.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully installed annotated-types-0.7.0 anyio-4.8.0 cachetools-5.5.2 certifi-2025.1.31 charset-normalizer-3.4.1 click-8.1.8 docker-7.1.0 docstring-parser-0.16 exceptiongroup-1.2.2 fastapi-0.114.0 google-api-core-2.24.1 google-auth-2.38.0 google-cloud-aiplatform-1.82.0 google-cloud-bigquery-3.30.0 google-cloud-core-2.4.2 google-cloud-resource-manager-1.14.1 google-cloud-storage-2.19.0 google-crc32c-1.6.0 google-resumable-media-2.7.2 googleapis-common-protos-1.68.0 grpc-google-iam-v1-0.14.0 grpcio-1.70.0 grpcio-status-1.70.0 h11-0.14.0 httpcore-0.17.3 httptools-0.6.4 httpx-0.24.1 idna-3.10 joblib-1.4.2 numpy-2.2.3 packaging-24.2 proto-plus-1.26.0 protobuf-5.29.3 pyasn1-0.6.1 pyasn1-modules-0.4.1 pydantic-2.10.6 pydantic-core-2.27.2 python-dateutil-2.9.0.post0 python-dotenv-1.0.1 pyyaml-6.0.2 requests-2.32.3 rsa-4.9 scikit-learn-1.6.1 scipy-1.15.2 shapely-2.0.7 six-1.17.0 sniffio-1.3.1 starlette-0.38.6 threadpoolctl-3.5.0 typing-extensions-4.12.2 urllib3-2.3.0 uvicorn-0.34.0 uvloop-0.21.0 watchfiles-1.0.4 websockets-15.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 [notice] A new release of pip is available: 23.0.1 -> 25.0.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 [notice] To update, run: pip install --upgrade pip
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container 5c16d7912b65
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> e35a55d9bdb4
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully built e35a55d9bdb4
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully tagged gcr.io/ucaip-sample-tests/prediction-cpr/sklearn:20250227_233526
INFO google.cloud.aiplatform.prediction.local_endpoint:local_endpoint.py:237 Got the project id from the global config: ucaip-sample-tests.
INFO google.cloud.aiplatform.prediction.local_endpoint:local_endpoint.py:237 Got the project id from the global config: ucaip-sample-tests.
INFO root:test_prediction_cpr.py:90 CompletedProcess(args=['gcloud', 'auth', 'configure-docker'], returncode=0, stdout=b'', stderr=b'Adding credentials for all GCR repositories.\nWARNING: A long list of credential helpers may cause delays running \'docker build\'. We recommend passing the registry name to configure only the registry you are using.\nAfter update, the following will be written to your Docker config file located \nat [/root/.docker/config.json]:\n {\n "credHelpers": {\n "gcr.io": "gcloud",\n "us.gcr.io": "gcloud",\n "eu.gcr.io": "gcloud",\n "asia.gcr.io": "gcloud",\n "staging-k8s.gcr.io": "gcloud",\n "marketplace.gcr.io": "gcloud"\n }\n}\n\nDo you want to continue (Y/n)? \nDocker configuration file updated.\n')
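The stderr above warns that a long list of credential helpers may cause delays running 'docker build' and recommends passing the registry name to configure only the registry in use. A minimal sketch of scoping the call, mirroring the subprocess invocation in the test (the registry hostname is an example, not taken from this job):

import subprocess

# Configure credentials for a single registry instead of every GCR host.
proc = subprocess.run(
    ["gcloud", "auth", "configure-docker", "us-central1-docker.pkg.dev"],
    capture_output=True,
)
print(proc.returncode, proc.stderr.decode())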
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 The push refers to repository [gcr.io/ucaip-sample-tests/prediction-cpr/sklearn]
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 61d9712a39a8: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 aa48bc8816f6: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 e092c372e690: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 7dc9c93f38ed: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 fb29abb2209e: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 784c5d2bb2c2: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ecbadaa33ad9: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 4b017a36fd9c: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 20a9b386e10e: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 f8217d7865d2: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 01c9a2a5f237: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 4b017a36fd9c: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 20a9b386e10e: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 f8217d7865d2: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 01c9a2a5f237: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 784c5d2bb2c2: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ecbadaa33ad9: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 error parsing HTTP 412 response body: invalid character 'C' looking for beginning of value: "Container Registry is deprecated and shutting down, please use the auto migration tool to migrate to Artifact Registry. For more details see: https://cloud.google.com/artifact-registry/docs/transition/auto-migrate-gcr-ar"
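The HTTP 412 above is Container Registry's shutdown notice: pushes to gcr.io are rejected, and the image has to go to Artifact Registry instead. A minimal sketch of re-tagging and pushing the image built above via the docker SDK (docker 7.1.0 is installed in this environment per the log); the us-central1-docker.pkg.dev repository name is an assumption and must exist before pushing:

import docker

client = docker.from_env()
src = "gcr.io/ucaip-sample-tests/prediction-cpr/sklearn:20250227_233526"
# Assumed Artifact Registry repository; create it first if it does not exist.
dst = "us-central1-docker.pkg.dev/ucaip-sample-tests/prediction-cpr/sklearn"

image = client.images.get(src)           # the locally built image from above
image.tag(dst, tag="20250227_233526")    # re-tag for Artifact Registry
for line in client.images.push(dst, tag="20250227_233526", stream=True, decode=True):
    print(line)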
___________ TestExperimentModel.test_deploy_model_with_gpu_container ___________
[gw10] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
self =
shared_state = {'bucket': , 'resources': [}
def test_deploy_model_with_gpu_container(self, shared_state):
aiplatform.init(
project=e2e_base._PROJECT,
location=e2e_base._LOCATION,
staging_bucket=f"gs://{shared_state['staging_bucket_name']}",
)
# It takes a long time to deploy a model. To reduce the system test run
# time, we randomly choose one registered model to test deployment.
> registered_model = random.choice(self.registered_models_gpu)
tests/system/aiplatform/test_experiment_model.py:357:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = , seq = []
def choice(self, seq):
"""Choose a random element from a non-empty sequence."""
# raises IndexError if seq is empty
> return seq[self._randbelow(len(seq))]
E IndexError: list index out of range
/usr/local/lib/python3.10/random.py:378: IndexError
---------------------------- Captured log teardown -----------------------------
INFO google.cloud.aiplatform.models:base.py:189 Undeploying Endpoint model: projects/580378083368/locations/us-central1/endpoints/1158837976675909632
INFO google.cloud.aiplatform.models:base.py:209 Undeploy Endpoint model backing LRO: projects/580378083368/locations/us-central1/endpoints/1158837976675909632/operations/725088439178887168
INFO google.cloud.aiplatform.models:base.py:222 Endpoint model undeployed. Resource name: projects/580378083368/locations/us-central1/endpoints/1158837976675909632
INFO google.cloud.aiplatform.base:base.py:189 Deleting Endpoint : projects/580378083368/locations/us-central1/endpoints/1158837976675909632
INFO google.cloud.aiplatform.base:base.py:222 Endpoint deleted. Resource name: projects/580378083368/locations/us-central1/endpoints/1158837976675909632
INFO google.cloud.aiplatform.base:base.py:156 Deleting Endpoint resource: projects/580378083368/locations/us-central1/endpoints/1158837976675909632
INFO google.cloud.aiplatform.base:base.py:161 Delete Endpoint backing LRO: projects/580378083368/locations/us-central1/operations/8724607277295730688
INFO google.cloud.aiplatform.base:base.py:174 Endpoint resource projects/580378083368/locations/us-central1/endpoints/1158837976675909632 deleted.
INFO google.cloud.aiplatform.base:base.py:189 Deleting ExperimentModel : projects/580378083368/locations/us-central1/metadataStores/default/artifacts/sk-model
INFO google.cloud.aiplatform.base:base.py:222 ExperimentModel deleted. Resource name: projects/580378083368/locations/us-central1/metadataStores/default/artifacts/sk-model
INFO google.cloud.aiplatform.base:base.py:156 Deleting ExperimentModel resource: projects/580378083368/locations/us-central1/metadataStores/default/artifacts/sk-model
INFO google.cloud.aiplatform.base:base.py:161 Delete ExperimentModel backing LRO: projects/580378083368/locations/us-central1/metadataStores/default/artifacts/sk-model/operations/8008534936543821824
INFO google.cloud.aiplatform.base:base.py:174 ExperimentModel resource projects/580378083368/locations/us-central1/metadataStores/default/artifacts/sk-model deleted.
INFO google.cloud.aiplatform.base:base.py:189 Deleting Model : projects/580378083368/locations/us-central1/models/7855593865552592896
INFO google.cloud.aiplatform.base:base.py:222 Model deleted. Resource name: projects/580378083368/locations/us-central1/models/7855593865552592896
INFO google.cloud.aiplatform.base:base.py:156 Deleting Model resource: projects/580378083368/locations/us-central1/models/7855593865552592896
INFO google.cloud.aiplatform.base:base.py:161 Delete Model backing LRO: projects/580378083368/locations/us-central1/models/7855593865552592896/operations/736347438247313408
INFO google.cloud.aiplatform.base:base.py:174 Model resource projects/580378083368/locations/us-central1/models/7855593865552592896 deleted.
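The IndexError above means registered_models_gpu was empty when the test ran, presumably because the earlier registration step failed, so random.choice had nothing to pick from. A minimal guard sketch under that assumption: skip the deployment test instead of failing when no model is available.

import random
import pytest

def test_deploy_model_with_gpu_container(self, shared_state):
    # registered_models_gpu is populated by earlier tests in this class;
    # skip rather than raise IndexError when that step did not run or failed.
    if not self.registered_models_gpu:
        pytest.skip("no registered GPU models available to deploy")
    registered_model = random.choice(self.registered_models_gpu)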
_ TestGenerativeModels.test_generate_content_function_calling[grpc-PROD_ENDPOINT] _
[gw11] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
self =
api_endpoint_env_name = 'PROD_ENDPOINT'
def test_generate_content_function_calling(self, api_endpoint_env_name):
get_current_weather_func = generative_models.FunctionDeclaration(
name="get_current_weather",
description="Get the current weather in a given location",
parameters=_REQUEST_FUNCTION_PARAMETER_SCHEMA_STRUCT,
)
weather_tool = generative_models.Tool(
function_declarations=[get_current_weather_func],
)
model = generative_models.GenerativeModel(
GEMINI_MODEL_NAME,
# Specifying the tools once to avoid specifying them in every request
tools=[weather_tool],
)
# Define the user's prompt in a Content object that we can reuse in model calls
prompt = "What is the weather like in Boston?"
user_prompt_content = generative_models.Content(
role="user",
parts=[
generative_models.Part.from_text(prompt),
],
)
# Send the prompt and instruct the model to generate content using the Tool
response = model.generate_content(
user_prompt_content,
generation_config={"temperature": 0},
tools=[weather_tool],
)
response_function_call_content = response.candidates[0].content
assert (
response.candidates[0].content.parts[0].function_call.name
== "get_current_weather"
)
assert response.candidates[0].function_calls[0].args["location"]
assert len(response.candidates[0].function_calls) == 1
> assert (
response.candidates[0].function_calls[0]
== response.candidates[0].content.parts[0].function_call
)
E assert name: "get_current_weather"\nargs {\n fields {\n key: "location"\n value {\n string_value: "Boston, MA"\n }\n }\n}\n == name: "get_current_weather"\nargs {\n fields {\n key: "location"\n value {\n string_value: "Boston, MA"\n }\n }\n}\n
E + where name: "get_current_weather"\nargs {\n fields {\n key: "location"\n value {\n string_value: "Boston, MA"\n }\n }\n}\n = function_call {\n name: "get_current_weather"\n args {\n fields {\n key: "location"\n value {\n string_value: "Boston, MA"\n }\n }\n }\n}\n.function_call
tests/system/vertexai/test_generative_models.py:565: AssertionError
_ TestGenerativeModels.test_generate_content_function_calling[rest-PROD_ENDPOINT] _
[gw11] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
self =
api_endpoint_env_name = 'PROD_ENDPOINT'
def test_generate_content_function_calling(self, api_endpoint_env_name):
get_current_weather_func = generative_models.FunctionDeclaration(
name="get_current_weather",
description="Get the current weather in a given location",
parameters=_REQUEST_FUNCTION_PARAMETER_SCHEMA_STRUCT,
)
weather_tool = generative_models.Tool(
function_declarations=[get_current_weather_func],
)
model = generative_models.GenerativeModel(
GEMINI_MODEL_NAME,
# Specifying the tools once to avoid specifying them in every request
tools=[weather_tool],
)
# Define the user's prompt in a Content object that we can reuse in model calls
prompt = "What is the weather like in Boston?"
user_prompt_content = generative_models.Content(
role="user",
parts=[
generative_models.Part.from_text(prompt),
],
)
# Send the prompt and instruct the model to generate content using the Tool
response = model.generate_content(
user_prompt_content,
generation_config={"temperature": 0},
tools=[weather_tool],
)
response_function_call_content = response.candidates[0].content
assert (
response.candidates[0].content.parts[0].function_call.name
== "get_current_weather"
)
assert response.candidates[0].function_calls[0].args["location"]
assert len(response.candidates[0].function_calls) == 1
> assert (
response.candidates[0].function_calls[0]
== response.candidates[0].content.parts[0].function_call
)
E assert name: "get_current_weather"\nargs {\n fields {\n key: "location"\n value {\n string_value: "Boston, MA"\n }\n }\n}\n == name: "get_current_weather"\nargs {\n fields {\n key: "location"\n value {\n string_value: "Boston, MA"\n }\n }\n}\n
E + where name: "get_current_weather"\nargs {\n fields {\n key: "location"\n value {\n string_value: "Boston, MA"\n }\n }\n}\n = function_call {\n name: "get_current_weather"\n args {\n fields {\n key: "location"\n value {\n string_value: "Boston, MA"\n }\n }\n }\n}\n.function_call
tests/system/vertexai/test_generative_models.py:565: AssertionError
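In both failures above the two function_call objects print identically, so the == mismatch most likely comes from the operands being different wrapper types (for example a helper-returned value versus the raw message on the content part) rather than from different field values. A hedged diagnostic sketch, reusing the names from the test:

fc_from_helper = response.candidates[0].function_calls[0]
fc_from_part = response.candidates[0].content.parts[0].function_call

print(type(fc_from_helper), type(fc_from_part))  # differing types would explain == failing
assert fc_from_helper.name == fc_from_part.name
# The log shows byte-identical textual forms, so compare those instead of ==.
assert str(fc_from_helper) == str(fc_from_part)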
_ TestTokenization.test_count_tokens_content_is_function_response[gemini-1.0-pro-PROD_ENDPOINT] _
[gw14] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
self =
model_name = 'gemini-1.0-pro'
@pytest.mark.parametrize(
"model_name",
_MODELS,
)
def test_count_tokens_content_is_function_response(self, model_name):
part = Part._from_gapic(
gapic_content_types.Part(function_response=_FUNCTION_RESPONSE)
)
tokenizer = tokenizer_preview(model_name)
model = GenerativeModel(model_name)
assert tokenizer.count_tokens(part).total_tokens
> assert (
tokenizer.count_tokens(part).total_tokens
== model.count_tokens(part).total_tokens
)
E assert 7 == 0
E + where 7 = CountTokensResult(total_tokens=7).total_tokens
E + where CountTokensResult(total_tokens=7) = count_tokens(function_response {\n name: "function_response"\n response {\n fields {\n key: "string_key"\n value {\n string_value: "value"\n }\n }\n }\n}\n)
E + where count_tokens = .count_tokens
E + and 0 = total_billable_characters: 32\nprompt_tokens_details {\n modality: TEXT\n}\n.total_tokens
E + where total_billable_characters: 32\nprompt_tokens_details {\n modality: TEXT\n}\n = count_tokens(function_response {\n name: "function_response"\n response {\n fields {\n key: "string_key"\n value {\n string_value: "value"\n }\n }\n }\n}\n)
E + where count_tokens = .count_tokens
tests/system/vertexai/test_tokenization.py:284: AssertionError
_ TestTokenization.test_count_tokens_content_is_function_response[gemini-1.5-pro-PROD_ENDPOINT] _
[gw14] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
self =
model_name = 'gemini-1.5-pro'
@pytest.mark.parametrize(
"model_name",
_MODELS,
)
def test_count_tokens_content_is_function_response(self, model_name):
part = Part._from_gapic(
gapic_content_types.Part(function_response=_FUNCTION_RESPONSE)
)
tokenizer = tokenizer_preview(model_name)
model = GenerativeModel(model_name)
assert tokenizer.count_tokens(part).total_tokens
> assert (
tokenizer.count_tokens(part).total_tokens
== model.count_tokens(part).total_tokens
)
E assert 7 == 0
E + where 7 = CountTokensResult(total_tokens=7).total_tokens
E + where CountTokensResult(total_tokens=7) = count_tokens(function_response {\n name: "function_response"\n response {\n fields {\n key: "string_key"\n value {\n string_value: "value"\n }\n }\n }\n}\n)
E + where count_tokens = .count_tokens
E + and 0 = total_billable_characters: 32\nprompt_tokens_details {\n modality: TEXT\n}\n.total_tokens
E + where total_billable_characters: 32\nprompt_tokens_details {\n modality: TEXT\n}\n = count_tokens(function_response {\n name: "function_response"\n response {\n fields {\n key: "string_key"\n value {\n string_value: "value"\n }\n }\n }\n}\n)
E + where count_tokens = .count_tokens
tests/system/vertexai/test_tokenization.py:284: AssertionError
_ TestTokenization.test_count_tokens_content_is_function_response[gemini-1.5-flash-PROD_ENDPOINT] _
[gw14] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
self =
model_name = 'gemini-1.5-flash'
@pytest.mark.parametrize(
"model_name",
_MODELS,
)
def test_count_tokens_content_is_function_response(self, model_name):
part = Part._from_gapic(
gapic_content_types.Part(function_response=_FUNCTION_RESPONSE)
)
tokenizer = tokenizer_preview(model_name)
model = GenerativeModel(model_name)
assert tokenizer.count_tokens(part).total_tokens
> assert (
tokenizer.count_tokens(part).total_tokens
== model.count_tokens(part).total_tokens
)
E assert 7 == 0
E + where 7 = CountTokensResult(total_tokens=7).total_tokens
E + where CountTokensResult(total_tokens=7) = count_tokens(function_response {\n name: "function_response"\n response {\n fields {\n key: "string_key"\n value {\n string_value: "value"\n }\n }\n }\n}\n)
E + where count_tokens = .count_tokens
E + and 0 = total_billable_characters: 32\nprompt_tokens_details {\n modality: TEXT\n}\n.total_tokens
E + where total_billable_characters: 32\nprompt_tokens_details {\n modality: TEXT\n}\n = count_tokens(function_response {\n name: "function_response"\n response {\n fields {\n key: "string_key"\n value {\n string_value: "value"\n }\n }\n }\n}\n)
E + where count_tokens = .count_tokens
tests/system/vertexai/test_tokenization.py:284: AssertionError
_ TestTokenization.test_count_tokens_content_is_function_response[gemini-1.5-flash-002-PROD_ENDPOINT] _
[gw14] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
self =
model_name = 'gemini-1.5-flash-002'
@pytest.mark.parametrize(
"model_name",
_MODELS,
)
def test_count_tokens_content_is_function_response(self, model_name):
part = Part._from_gapic(
gapic_content_types.Part(function_response=_FUNCTION_RESPONSE)
)
tokenizer = tokenizer_preview(model_name)
model = GenerativeModel(model_name)
assert tokenizer.count_tokens(part).total_tokens
> assert (
tokenizer.count_tokens(part).total_tokens
== model.count_tokens(part).total_tokens
)
E assert 7 == 0
E + where 7 = CountTokensResult(total_tokens=7).total_tokens
E + where CountTokensResult(total_tokens=7) = count_tokens(function_response {\n name: "function_response"\n response {\n fields {\n key: "string_key"\n value {\n string_value: "value"\n }\n }\n }\n}\n)
E + where count_tokens = .count_tokens
E + and 0 = total_billable_characters: 32\nprompt_tokens_details {\n modality: TEXT\n}\n.total_tokens
E + where total_billable_characters: 32\nprompt_tokens_details {\n modality: TEXT\n}\n = count_tokens(function_response {\n name: "function_response"\n response {\n fields {\n key: "string_key"\n value {\n string_value: "value"\n }\n }\n }\n}\n)
E + where count_tokens = .count_tokens
tests/system/vertexai/test_tokenization.py:284: AssertionError
_ TestTokenization.test_count_tokens_content_is_function_response[gemini-1.5-pro-002-PROD_ENDPOINT] _
[gw14] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
self =
model_name = 'gemini-1.5-pro-002'
@pytest.mark.parametrize(
"model_name",
_MODELS,
)
def test_count_tokens_content_is_function_response(self, model_name):
part = Part._from_gapic(
gapic_content_types.Part(function_response=_FUNCTION_RESPONSE)
)
tokenizer = tokenizer_preview(model_name)
model = GenerativeModel(model_name)
assert tokenizer.count_tokens(part).total_tokens
> assert (
tokenizer.count_tokens(part).total_tokens
== model.count_tokens(part).total_tokens
)
E assert 7 == 0
E + where 7 = CountTokensResult(total_tokens=7).total_tokens
E + where CountTokensResult(total_tokens=7) = count_tokens(function_response {\n name: "function_response"\n response {\n fields {\n key: "string_key"\n value {\n string_value: "value"\n }\n }\n }\n}\n)
E + where count_tokens = .count_tokens
E + and 0 = total_billable_characters: 32\nprompt_tokens_details {\n modality: TEXT\n}\n.total_tokens
E + where total_billable_characters: 32\nprompt_tokens_details {\n modality: TEXT\n}\n = count_tokens(function_response {\n name: "function_response"\n response {\n fields {\n key: "string_key"\n value {\n string_value: "value"\n }\n }\n }\n}\n)
E + where count_tokens = .count_tokens
tests/system/vertexai/test_tokenization.py:284: AssertionError
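All five parametrized failures above have the same shape: the local tokenizer counts 7 tokens for the function_response part, while the service response carries total_billable_characters (32) but leaves total_tokens unset, so it reads back as 0. A minimal repro sketch using the names from the test:

local_result = tokenizer.count_tokens(part)
remote_result = model.count_tokens(part)

print(local_result.total_tokens)                # 7 in the failure output
print(remote_result.total_tokens)               # 0 (unset) in the failure output
print(remote_result.total_billable_characters)  # 32 in the failure output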
________________________ TestRayData.test_ray_data[2.9] ________________________
[gw13] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
self =
cluster_ray_version = '2.9'
@pytest.mark.parametrize("cluster_ray_version", ["2.9", "2.33"])
def test_ray_data(self, cluster_ray_version):
head_node_type = vertex_ray.Resources()
worker_node_types = [
vertex_ray.Resources(),
vertex_ray.Resources(),
vertex_ray.Resources(),
]
assert ray.__version__ == RAY_VERSION
aiplatform.init(project=PROJECT_ID, location="us-central1")
timestamp = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
# Create cluster, get dashboard address
> cluster_resource_name = vertex_ray.create_ray_cluster(
head_node_type=head_node_type,
worker_node_types=worker_node_types,
cluster_name=f"ray-cluster-{timestamp}-test-ray-data",
ray_version=cluster_ray_version,
)
tests/system/vertex_ray/test_ray_data.py:100:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/vertex_ray/cluster_init.py:373: in create_ray_cluster
response = _gapic_utils.get_persistent_resource(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
persistent_resource_name = 'projects/580378083368/locations/us-central1/persistentResources/ray-cluster-2025-02-27-23-43-54-test-ray-data'
tolerance = 1
def get_persistent_resource(
persistent_resource_name: str, tolerance: Optional[int] = 0
):
"""Get persistent resource.
Args:
persistent_resource_name:
"projects//locations//persistentResources/".
tolerance: number of attempts to get persistent resource.
Returns:
aiplatform_v1.PersistentResource if state is RUNNING.
Raises:
ValueError: Invalid cluster resource name.
RuntimeError: Service returns error.
RuntimeError: Cluster resource state is STOPPING.
RuntimeError: Cluster resource state is ERROR.
"""
client = create_persistent_resource_client()
request = GetPersistentResourceRequest(name=persistent_resource_name)
# TODO(b/277117901): Add test cases for polling and error handling
num_attempts = 0
while True:
try:
response = client.get_persistent_resource(request)
except exceptions.NotFound:
response = None
if num_attempts >= tolerance:
raise ValueError(
"[Ray on Vertex AI]: Invalid cluster_resource_name (404 not found)."
)
if response:
if response.error.message:
logging.error("[Ray on Vertex AI]: %s" % response.error.message)
> raise RuntimeError("[Ray on Vertex AI]: Cluster returned an error.")
E RuntimeError: [Ray on Vertex AI]: Cluster returned an error.
google/cloud/aiplatform/vertex_ray/util/_gapic_utils.py:115: RuntimeError
----------------------------- Captured stdout call -----------------------------
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 1; sleeping for 0:02:30 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 2; sleeping for 0:01:54.750000 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 3; sleeping for 0:01:27.783750 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 4; sleeping for 0:01:07.154569 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 5; sleeping for 0:00:51.373245 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 6; sleeping for 0:00:39.300532 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 7; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempts 8 through 47; sleeping for 0:00:30.064907 seconds between each attempt (identical state/wait lines repeated 40 times)
------------------------------ Captured log call -------------------------------
ERROR root:_gapic_utils.py:114 [Ray on Vertex AI]: An internal error occurred on your cluster. Please try recreating one in a few minutes. If you still experience errors, contact Cloud AI Platform.
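Since the cluster above ended in an ERROR state and the message suggests recreating it, the errored persistent resource should be deleted before retrying. A minimal cleanup sketch, assuming the SDK's delete helper and the resource name from the traceback above:

from google.cloud.aiplatform import vertex_ray

cluster_resource_name = (
    "projects/580378083368/locations/us-central1/persistentResources/"
    "ray-cluster-2025-02-27-23-43-54-test-ray-data"
)
vertex_ray.delete_ray_cluster(cluster_resource_name)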
_______________________ TestRayData.test_ray_data[2.33] ________________________
[gw13] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
args = (parent: "projects/580378083368/locations/us-central1"
persistent_resource {
resource_pools {
id: "head-node"
... }
ray_logs_spec {
}
}
}
}
persistent_resource_id: "ray-cluster-2025-02-28-00-13-06-test-ray-data"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/580378083368/locations/us-central1'), ('x-goog-api-client', '.../1.82.0+vertex_ray+top_google_constructor_method+google.cloud.aiplatform.vertex_ray.cluster_init.create_ray_cluster')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:247: in __call__
response, ignored_call = self._with_call(request,
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:290: in _with_call
return call.result(), call
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:343: in result
raise self
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:274: in continuation
response, call = self._thunk(new_method).with_call(
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:957: in with_call
return _end_unary_response_blocking(state, call, True, None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state =
call =
with_call = True, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.FAILED_PRECONDITION
E details = "You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another."
E debug_error_string = "UNKNOWN:Error received from peer ipv4:142.250.107.95:443 {created_time:"2025-02-28T00:13:07.430087541+00:00", grpc_status:9, grpc_message:"You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another."}"
E >
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
head_node_type = Resources(machine_type='n1-standard-16', node_count=1, accelerator_type=None, accelerator_count=0, boot_disk_type='pd-ssd', boot_disk_size_gb=100, custom_image=None, autoscaling_spec=None)
python_version = '3.10', ray_version = '2.33', network = None
service_account = None
cluster_name = 'ray-cluster-2025-02-28-00-13-06-test-ray-data'
worker_node_types = [Resources(machine_type='n1-standard-16', node_count=1, accelerator_type=None, accelerator_count=0, boot_disk_type='pd...e=None, accelerator_count=0, boot_disk_type='pd-ssd', boot_disk_size_gb=100, custom_image=None, autoscaling_spec=None)]
custom_images = None, enable_metrics_collection = True, enable_logging = True
psc_interface_config = None, reserved_ip_ranges = None, nfs_mounts = None
labels = None
def create_ray_cluster(
head_node_type: Optional[resources.Resources] = resources.Resources(),
python_version: Optional[str] = "3.10",
ray_version: Optional[str] = "2.33",
network: Optional[str] = None,
service_account: Optional[str] = None,
cluster_name: Optional[str] = None,
worker_node_types: Optional[List[resources.Resources]] = [resources.Resources()],
custom_images: Optional[resources.NodeImages] = None,
enable_metrics_collection: Optional[bool] = True,
enable_logging: Optional[bool] = True,
psc_interface_config: Optional[resources.PscIConfig] = None,
reserved_ip_ranges: Optional[List[str]] = None,
nfs_mounts: Optional[List[resources.NfsMount]] = None,
labels: Optional[Dict[str, str]] = None,
) -> str:
"""Create a ray cluster on the Vertex AI.
Sample usage:
from vertex_ray import Resources
head_node_type = Resources(
machine_type="n1-standard-8",
node_count=1,
accelerator_type="NVIDIA_TESLA_K80",
accelerator_count=1,
custom_image="us-docker.pkg.dev/my-project/ray-cpu-image.2.9:latest", # Optional
)
worker_node_types = [Resources(
machine_type="n1-standard-8",
node_count=2,
accelerator_type="NVIDIA_TESLA_K80",
accelerator_count=1,
custom_image="us-docker.pkg.dev/my-project/ray-gpu-image.2.9:latest", # Optional
)]
cluster_resource_name = vertex_ray.create_ray_cluster(
head_node_type=head_node_type,
network="projects/my-project-number/global/networks/my-vpc-name", # Optional
service_account="my-service-account@my-project-number.iam.gserviceaccount.com", # Optional
cluster_name="my-cluster-name", # Optional
worker_node_types=worker_node_types,
ray_version="2.9",
)
After a ray cluster is set up, you can call
`ray.init(f"vertex_ray://{cluster_resource_name}", runtime_env=...)` without
specifying ray cluster address to connect to the cluster. To shut down the
cluster you can call `vertex_ray.delete_ray_cluster()`.
Note: If the active ray cluster has not finished shutting down, you cannot
create a new ray cluster with the same cluster_name.
Args:
head_node_type: The head node resource. Resources.node_count must be 1.
If not set, default value of Resources() class will be used.
python_version: Python version for the ray cluster.
ray_version: Ray version for the ray cluster. Default is 2.33.0.
network: Virtual private cloud (VPC) network. For Ray Client, VPC
peering is required to connect to the Ray Cluster managed in the
Vertex API service. For Ray Job API, VPC network is not required
because Ray Cluster connection can be accessed through dashboard
address.
service_account: Service account to be used for running Ray programs on
the cluster.
cluster_name: This value may be up to 63 characters, and valid
characters are `[a-z0-9_-]`. The first character cannot be a number
or hyphen.
worker_node_types: The list of Resources of the worker nodes. The same
Resources object should not appear multiple times in the list.
custom_images: The NodeImages which specifies head node and worker nodes
images. All the workers will share the same image. If each Resource
has a specific custom image, use `Resources.custom_image` for
head/worker_node_type(s). Note that configuring `Resources.custom_image`
will override `custom_images` here. Allowlist only.
enable_metrics_collection: Enable Ray metrics collection for visualization.
enable_logging: Enable exporting Ray logs to Cloud Logging.
psc_interface_config: PSC-I config.
reserved_ip_ranges: A list of names for the reserved IP ranges under
the VPC network that can be used for this cluster. If set, we will
deploy the cluster within the provided IP ranges. Otherwise, the
cluster is deployed to any IP ranges under the provided VPC network.
Example: ["vertex-ai-ip-range"].
labels:
The labels with user-defined metadata to organize Ray cluster.
Label keys and values can be no longer than 64 characters (Unicode
codepoints), can only contain lowercase letters, numeric characters,
underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
Returns:
The cluster_resource_name of the initiated Ray cluster on Vertex.
Raises:
ValueError: If the cluster is not created successfully.
RuntimeError: If the ray_version is 2.4.
"""
if network is None:
logging.info(
"[Ray on Vertex]: No VPC network configured. It is required for client connection."
)
if ray_version == "2.4":
raise RuntimeError(_V2_4_WARNING_MESSAGE)
if ray_version == "2.9.3":
warnings.warn(_V2_9_WARNING_MESSAGE, DeprecationWarning, stacklevel=1)
local_ray_version = _validation_utils.get_local_ray_version()
if ray_version != local_ray_version:
if custom_images is None and head_node_type.custom_image is None:
install_ray_version = "2.33.0"
logging.info(
"[Ray on Vertex]: Local runtime has Ray version %s"
", but the requested cluster runtime has %s. Please "
"ensure that the Ray versions match for client connectivity. You may "
'"pip install --user --force-reinstall ray[default]==%s"'
" and restart runtime before cluster connection."
% (local_ray_version, ray_version, install_ray_version)
)
else:
logging.info(
"[Ray on Vertex]: Local runtime has Ray version %s."
"Please ensure that the Ray versions match for client connectivity."
% local_ray_version
)
if cluster_name is None:
cluster_name = "ray-cluster-" + utils.timestamped_unique_name()
if head_node_type:
if head_node_type.node_count != 1:
raise ValueError(
"[Ray on Vertex AI]: For head_node_type, "
+ "Resources.node_count must be 1."
)
if head_node_type.autoscaling_spec is not None:
raise ValueError(
"[Ray on Vertex AI]: For head_node_type, "
+ "Resources.autoscaling_spec must be None."
)
if (
head_node_type.accelerator_type is None
and head_node_type.accelerator_count > 0
):
raise ValueError(
"[Ray on Vertex]: accelerator_type must be specified when"
+ " accelerator_count is set to a value other than 0."
)
resource_pool_images = {}
# head node
resource_pool_0 = ResourcePool()
resource_pool_0.id = "head-node"
resource_pool_0.replica_count = head_node_type.node_count
resource_pool_0.machine_spec.machine_type = head_node_type.machine_type
resource_pool_0.machine_spec.accelerator_count = head_node_type.accelerator_count
resource_pool_0.machine_spec.accelerator_type = head_node_type.accelerator_type
resource_pool_0.disk_spec.boot_disk_type = head_node_type.boot_disk_type
resource_pool_0.disk_spec.boot_disk_size_gb = head_node_type.boot_disk_size_gb
enable_cuda = True if head_node_type.accelerator_count > 0 else False
if head_node_type.custom_image is not None:
image_uri = head_node_type.custom_image
elif custom_images is None:
image_uri = _validation_utils.get_image_uri(
ray_version, python_version, enable_cuda
)
elif custom_images.head is not None and custom_images.worker is not None:
image_uri = custom_images.head
else:
raise ValueError(
"[Ray on Vertex AI]: custom_images.head and custom_images.worker must be specified when custom_images is set."
)
resource_pool_images[resource_pool_0.id] = image_uri
worker_pools = []
i = 0
if worker_node_types:
for worker_node_type in worker_node_types:
if (
worker_node_type.accelerator_type is None
and worker_node_type.accelerator_count > 0
):
raise ValueError(
"[Ray on Vertex]: accelerator_type must be specified when"
+ " accelerator_count is set to a value other than 0."
)
additional_replica_count = resources._check_machine_spec_identical(
head_node_type, worker_node_type
)
if worker_node_type.autoscaling_spec is None:
# Worker and head share the same MachineSpec, merge them into the
# same ResourcePool
resource_pool_0.replica_count = (
resource_pool_0.replica_count + additional_replica_count
)
else:
if additional_replica_count > 0:
# Autoscaling for single ResourcePool (homogeneous cluster).
resource_pool_0.replica_count = None
resource_pool_0.autoscaling_spec.min_replica_count = (
worker_node_type.autoscaling_spec.min_replica_count
)
resource_pool_0.autoscaling_spec.max_replica_count = (
worker_node_type.autoscaling_spec.max_replica_count
)
if additional_replica_count == 0:
resource_pool = ResourcePool()
resource_pool.id = f"worker-pool{i+1}"
if worker_node_type.autoscaling_spec is None:
resource_pool.replica_count = worker_node_type.node_count
else:
# Autoscaling for worker ResourcePool.
resource_pool.autoscaling_spec.min_replica_count = (
worker_node_type.autoscaling_spec.min_replica_count
)
resource_pool.autoscaling_spec.max_replica_count = (
worker_node_type.autoscaling_spec.max_replica_count
)
resource_pool.machine_spec.machine_type = worker_node_type.machine_type
resource_pool.machine_spec.accelerator_count = (
worker_node_type.accelerator_count
)
resource_pool.machine_spec.accelerator_type = (
worker_node_type.accelerator_type
)
resource_pool.disk_spec.boot_disk_type = worker_node_type.boot_disk_type
resource_pool.disk_spec.boot_disk_size_gb = (
worker_node_type.boot_disk_size_gb
)
worker_pools.append(resource_pool)
enable_cuda = True if worker_node_type.accelerator_count > 0 else False
if worker_node_type.custom_image is not None:
image_uri = worker_node_type.custom_image
elif custom_images is None:
image_uri = _validation_utils.get_image_uri(
ray_version, python_version, enable_cuda
)
else:
image_uri = custom_images.worker
resource_pool_images[resource_pool.id] = image_uri
i += 1
resource_pools = [resource_pool_0] + worker_pools
metrics_collection_disabled = not enable_metrics_collection
ray_metric_spec = RayMetricSpec(disabled=metrics_collection_disabled)
logging_disabled = not enable_logging
ray_logs_spec = RayLogsSpec(disabled=logging_disabled)
ray_spec = RaySpec(
resource_pool_images=resource_pool_images,
ray_metric_spec=ray_metric_spec,
ray_logs_spec=ray_logs_spec,
)
if nfs_mounts:
gapic_nfs_mounts = []
for nfs_mount in nfs_mounts:
gapic_nfs_mounts.append(
NfsMount(
server=nfs_mount.server,
path=nfs_mount.path,
mount_point=nfs_mount.mount_point,
)
)
ray_spec.nfs_mounts = gapic_nfs_mounts
if service_account:
service_account_spec = ServiceAccountSpec(
enable_custom_service_account=True,
service_account=service_account,
)
resource_runtime_spec = ResourceRuntimeSpec(
ray_spec=ray_spec,
service_account_spec=service_account_spec,
)
else:
resource_runtime_spec = ResourceRuntimeSpec(ray_spec=ray_spec)
if psc_interface_config:
gapic_psc_interface_config = PscInterfaceConfig(
network_attachment=psc_interface_config.network_attachment,
)
else:
gapic_psc_interface_config = None
persistent_resource = PersistentResource(
resource_pools=resource_pools,
network=network,
labels=labels,
resource_runtime_spec=resource_runtime_spec,
psc_interface_config=gapic_psc_interface_config,
reserved_ip_ranges=reserved_ip_ranges,
)
location = initializer.global_config.location
project_id = initializer.global_config.project
project_number = resource_manager_utils.get_project_number(project_id)
parent = f"projects/{project_number}/locations/{location}"
request = persistent_resource_service.CreatePersistentResourceRequest(
parent=parent,
persistent_resource=persistent_resource,
persistent_resource_id=cluster_name,
)
client = _gapic_utils.create_persistent_resource_client()
try:
> _ = client.create_persistent_resource(request)
google/cloud/aiplatform/vertex_ray/cluster_init.py:367:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform_v1beta1/services/persistent_resource_service/client.py:1006: in create_persistent_resource
response = rpc(
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131: in __call__
return wrapped_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "projects/580378083368/locations/us-central1"
persistent_resource {
resource_pools {
id: "head-node"
... }
ray_logs_spec {
}
}
}
}
persistent_resource_id: "ray-cluster-2025-02-28-00-13-06-test-ray-data"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/580378083368/locations/us-central1'), ('x-goog-api-client', '.../1.82.0+vertex_ray+top_google_constructor_method+google.cloud.aiplatform.vertex_ray.cluster_init.create_ray_cluster')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.FailedPrecondition: 400 You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another.
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:78: FailedPrecondition
The above exception was the direct cause of the following exception:
self =
cluster_ray_version = '2.33'
@pytest.mark.parametrize("cluster_ray_version", ["2.9", "2.33"])
def test_ray_data(self, cluster_ray_version):
head_node_type = vertex_ray.Resources()
worker_node_types = [
vertex_ray.Resources(),
vertex_ray.Resources(),
vertex_ray.Resources(),
]
assert ray.__version__ == RAY_VERSION
aiplatform.init(project=PROJECT_ID, location="us-central1")
timestamp = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
# Create cluster, get dashboard address
> cluster_resource_name = vertex_ray.create_ray_cluster(
head_node_type=head_node_type,
worker_node_types=worker_node_types,
cluster_name=f"ray-cluster-{timestamp}-test-ray-data",
ray_version=cluster_ray_version,
)
tests/system/vertex_ray/test_ray_data.py:100:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
head_node_type = Resources(machine_type='n1-standard-16', node_count=1, accelerator_type=None, accelerator_count=0, boot_disk_type='pd-ssd', boot_disk_size_gb=100, custom_image=None, autoscaling_spec=None)
python_version = '3.10', ray_version = '2.33', network = None
service_account = None
cluster_name = 'ray-cluster-2025-02-28-00-13-06-test-ray-data'
worker_node_types = [Resources(machine_type='n1-standard-16', node_count=1, accelerator_type=None, accelerator_count=0, boot_disk_type='pd...e=None, accelerator_count=0, boot_disk_type='pd-ssd', boot_disk_size_gb=100, custom_image=None, autoscaling_spec=None)]
custom_images = None, enable_metrics_collection = True, enable_logging = True
psc_interface_config = None, reserved_ip_ranges = None, nfs_mounts = None
labels = None
def create_ray_cluster(
head_node_type: Optional[resources.Resources] = resources.Resources(),
python_version: Optional[str] = "3.10",
ray_version: Optional[str] = "2.33",
network: Optional[str] = None,
service_account: Optional[str] = None,
cluster_name: Optional[str] = None,
worker_node_types: Optional[List[resources.Resources]] = [resources.Resources()],
custom_images: Optional[resources.NodeImages] = None,
enable_metrics_collection: Optional[bool] = True,
enable_logging: Optional[bool] = True,
psc_interface_config: Optional[resources.PscIConfig] = None,
reserved_ip_ranges: Optional[List[str]] = None,
nfs_mounts: Optional[List[resources.NfsMount]] = None,
labels: Optional[Dict[str, str]] = None,
) -> str:
"""Create a ray cluster on the Vertex AI.
Sample usage:
from vertex_ray import Resources
head_node_type = Resources(
machine_type="n1-standard-8",
node_count=1,
accelerator_type="NVIDIA_TESLA_K80",
accelerator_count=1,
custom_image="us-docker.pkg.dev/my-project/ray-cpu-image.2.9:latest", # Optional
)
worker_node_types = [Resources(
machine_type="n1-standard-8",
node_count=2,
accelerator_type="NVIDIA_TESLA_K80",
accelerator_count=1,
custom_image="us-docker.pkg.dev/my-project/ray-gpu-image.2.9:latest", # Optional
)]
cluster_resource_name = vertex_ray.create_ray_cluster(
head_node_type=head_node_type,
network="projects/my-project-number/global/networks/my-vpc-name", # Optional
service_account="my-service-account@my-project-number.iam.gserviceaccount.com", # Optional
cluster_name="my-cluster-name", # Optional
worker_node_types=worker_node_types,
ray_version="2.9",
)
After the Ray cluster is set up, you can call
`ray.init(f"vertex_ray://{cluster_resource_name}", runtime_env=...)` to connect
to the cluster without specifying its address. To shut down the
cluster, call `vertex_ray.delete_ray_cluster()`.
Note: If the active ray cluster has not finished shutting down, you cannot
create a new ray cluster with the same cluster_name.
Args:
head_node_type: The head node resource. Resources.node_count must be 1.
If not set, default value of Resources() class will be used.
python_version: Python version for the ray cluster.
ray_version: Ray version for the ray cluster. Default is 2.33.0.
network: Virtual private cloud (VPC) network. For Ray Client, VPC
peering is required to connect to the Ray Cluster managed in the
Vertex API service. For Ray Job API, VPC network is not required
because Ray Cluster connection can be accessed through dashboard
address.
service_account: Service account to be used for running Ray programs on
the cluster.
cluster_name: This value may be up to 63 characters, and valid
characters are `[a-z0-9_-]`. The first character cannot be a number
or hyphen.
worker_node_types: The list of Resources of the worker nodes. The same
Resources object should not appear multiple times in the list.
custom_images: The NodeImages which specifies head node and worker nodes
images. All the workers will share the same image. If each Resource
has a specific custom image, use `Resources.custom_image` for
head/worker_node_type(s). Note that configuring `Resources.custom_image`
will override `custom_images` here. Allowlist only.
enable_metrics_collection: Enable Ray metrics collection for visualization.
enable_logging: Enable exporting Ray logs to Cloud Logging.
psc_interface_config: PSC-I config.
reserved_ip_ranges: A list of names for the reserved IP ranges under
the VPC network that can be used for this cluster. If set, we will
deploy the cluster within the provided IP ranges. Otherwise, the
cluster is deployed to any IP ranges under the provided VPC network.
Example: ["vertex-ai-ip-range"].
labels:
The labels with user-defined metadata to organize Ray cluster.
Label keys and values can be no longer than 64 characters (Unicode
codepoints), can only contain lowercase letters, numeric characters,
underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
Returns:
The cluster_resource_name of the initiated Ray cluster on Vertex.
Raises:
ValueError: If the cluster is not created successfully.
RuntimeError: If the ray_version is 2.4.
"""
if network is None:
logging.info(
"[Ray on Vertex]: No VPC network configured. It is required for client connection."
)
if ray_version == "2.4":
raise RuntimeError(_V2_4_WARNING_MESSAGE)
if ray_version == "2.9.3":
warnings.warn(_V2_9_WARNING_MESSAGE, DeprecationWarning, stacklevel=1)
local_ray_version = _validation_utils.get_local_ray_version()
if ray_version != local_ray_version:
if custom_images is None and head_node_type.custom_image is None:
install_ray_version = "2.33.0"
logging.info(
"[Ray on Vertex]: Local runtime has Ray version %s"
", but the requested cluster runtime has %s. Please "
"ensure that the Ray versions match for client connectivity. You may "
'"pip install --user --force-reinstall ray[default]==%s"'
" and restart runtime before cluster connection."
% (local_ray_version, ray_version, install_ray_version)
)
else:
logging.info(
"[Ray on Vertex]: Local runtime has Ray version %s."
"Please ensure that the Ray versions match for client connectivity."
% local_ray_verion
)
if cluster_name is None:
cluster_name = "ray-cluster-" + utils.timestamped_unique_name()
if head_node_type:
if head_node_type.node_count != 1:
raise ValueError(
"[Ray on Vertex AI]: For head_node_type, "
+ "Resources.node_count must be 1."
)
if head_node_type.autoscaling_spec is not None:
raise ValueError(
"[Ray on Vertex AI]: For head_node_type, "
+ "Resources.autoscaling_spec must be None."
)
if (
head_node_type.accelerator_type is None
and head_node_type.accelerator_count > 0
):
raise ValueError(
"[Ray on Vertex]: accelerator_type must be specified when"
+ " accelerator_count is set to a value other than 0."
)
resource_pool_images = {}
# head node
resource_pool_0 = ResourcePool()
resource_pool_0.id = "head-node"
resource_pool_0.replica_count = head_node_type.node_count
resource_pool_0.machine_spec.machine_type = head_node_type.machine_type
resource_pool_0.machine_spec.accelerator_count = head_node_type.accelerator_count
resource_pool_0.machine_spec.accelerator_type = head_node_type.accelerator_type
resource_pool_0.disk_spec.boot_disk_type = head_node_type.boot_disk_type
resource_pool_0.disk_spec.boot_disk_size_gb = head_node_type.boot_disk_size_gb
enable_cuda = head_node_type.accelerator_count > 0
if head_node_type.custom_image is not None:
image_uri = head_node_type.custom_image
elif custom_images is None:
image_uri = _validation_utils.get_image_uri(
ray_version, python_version, enable_cuda
)
elif custom_images.head is not None and custom_images.worker is not None:
image_uri = custom_images.head
else:
raise ValueError(
"[Ray on Vertex AI]: custom_images.head and custom_images.worker must be specified when custom_images is set."
)
resource_pool_images[resource_pool_0.id] = image_uri
worker_pools = []
i = 0
if worker_node_types:
for worker_node_type in worker_node_types:
if (
worker_node_type.accelerator_type is None
and worker_node_type.accelerator_count > 0
):
raise ValueError(
"[Ray on Vertex]: accelerator_type must be specified when"
+ " accelerator_count is set to a value other than 0."
)
additional_replica_count = resources._check_machine_spec_identical(
head_node_type, worker_node_type
)
if worker_node_type.autoscaling_spec is None:
# Worker and head share the same MachineSpec, merge them into the
# same ResourcePool
resource_pool_0.replica_count = (
resource_pool_0.replica_count + additional_replica_count
)
else:
if additional_replica_count > 0:
# Autoscaling for single ResourcePool (homogeneous cluster).
resource_pool_0.replica_count = None
resource_pool_0.autoscaling_spec.min_replica_count = (
worker_node_type.autoscaling_spec.min_replica_count
)
resource_pool_0.autoscaling_spec.max_replica_count = (
worker_node_type.autoscaling_spec.max_replica_count
)
if additional_replica_count == 0:
resource_pool = ResourcePool()
resource_pool.id = f"worker-pool{i+1}"
if worker_node_type.autoscaling_spec is None:
resource_pool.replica_count = worker_node_type.node_count
else:
# Autoscaling for worker ResourcePool.
resource_pool.autoscaling_spec.min_replica_count = (
worker_node_type.autoscaling_spec.min_replica_count
)
resource_pool.autoscaling_spec.max_replica_count = (
worker_node_type.autoscaling_spec.max_replica_count
)
resource_pool.machine_spec.machine_type = worker_node_type.machine_type
resource_pool.machine_spec.accelerator_count = (
worker_node_type.accelerator_count
)
resource_pool.machine_spec.accelerator_type = (
worker_node_type.accelerator_type
)
resource_pool.disk_spec.boot_disk_type = worker_node_type.boot_disk_type
resource_pool.disk_spec.boot_disk_size_gb = (
worker_node_type.boot_disk_size_gb
)
worker_pools.append(resource_pool)
enable_cuda = worker_node_type.accelerator_count > 0
if worker_node_type.custom_image is not None:
image_uri = worker_node_type.custom_image
elif custom_images is None:
image_uri = _validation_utils.get_image_uri(
ray_version, python_version, enable_cuda
)
else:
image_uri = custom_images.worker
resource_pool_images[resource_pool.id] = image_uri
i += 1
resource_pools = [resource_pool_0] + worker_pools
metrics_collection_disabled = not enable_metrics_collection
ray_metric_spec = RayMetricSpec(disabled=metrics_collection_disabled)
logging_disabled = not enable_logging
ray_logs_spec = RayLogsSpec(disabled=logging_disabled)
ray_spec = RaySpec(
resource_pool_images=resource_pool_images,
ray_metric_spec=ray_metric_spec,
ray_logs_spec=ray_logs_spec,
)
if nfs_mounts:
gapic_nfs_mounts = []
for nfs_mount in nfs_mounts:
gapic_nfs_mounts.append(
NfsMount(
server=nfs_mount.server,
path=nfs_mount.path,
mount_point=nfs_mount.mount_point,
)
)
ray_spec.nfs_mounts = gapic_nfs_mounts
if service_account:
service_account_spec = ServiceAccountSpec(
enable_custom_service_account=True,
service_account=service_account,
)
resource_runtime_spec = ResourceRuntimeSpec(
ray_spec=ray_spec,
service_account_spec=service_account_spec,
)
else:
resource_runtime_spec = ResourceRuntimeSpec(ray_spec=ray_spec)
if psc_interface_config:
gapic_psc_interface_config = PscInterfaceConfig(
network_attachment=psc_interface_config.network_attachment,
)
else:
gapic_psc_interface_config = None
persistent_resource = PersistentResource(
resource_pools=resource_pools,
network=network,
labels=labels,
resource_runtime_spec=resource_runtime_spec,
psc_interface_config=gapic_psc_interface_config,
reserved_ip_ranges=reserved_ip_ranges,
)
location = initializer.global_config.location
project_id = initializer.global_config.project
project_number = resource_manager_utils.get_project_number(project_id)
parent = f"projects/{project_number}/locations/{location}"
request = persistent_resource_service.CreatePersistentResourceRequest(
parent=parent,
persistent_resource=persistent_resource,
persistent_resource_id=cluster_name,
)
client = _gapic_utils.create_persistent_resource_client()
try:
_ = client.create_persistent_resource(request)
except Exception as e:
> raise ValueError("Failed in cluster creation due to: ", e) from e
E ValueError: ('Failed in cluster creation due to: ', FailedPrecondition('You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another.'))
google/cloud/aiplatform/vertex_ray/cluster_init.py:369: ValueError
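The failure above is regional quota exhaustion, not a client-side bug: earlier runs left PersistentResources behind, so CreatePersistentResource is rejected before provisioning even starts. A minimal cleanup sketch, assuming the public `vertex_ray.list_ray_clusters()` / `vertex_ray.delete_ray_cluster()` helpers and that the project and region match the failing request; the `ray-cluster-` prefix filter is a hypothetical convention for identifying test-created clusters:

# Hypothetical pre-test cleanup (a sketch, not the suite's actual fixture):
# delete leftover test clusters so the regional PersistentResource quota
# is not exhausted before create_ray_cluster is called.
from google.cloud import aiplatform
from google.cloud.aiplatform import vertex_ray

aiplatform.init(project="my-project", location="us-central1")  # assumed values

for cluster in vertex_ray.list_ray_clusters():
    # cluster_resource_name has the form
    # projects/<number>/locations/<region>/persistentResources/<id>
    resource_id = cluster.cluster_resource_name.split("/")[-1]
    if resource_id.startswith("ray-cluster-"):  # only touch test-created clusters
        vertex_ray.delete_ray_cluster(cluster.cluster_resource_name)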
_____________ TestClusterManagement.test_cluster_management[2.33] ______________
[gw0] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
self =
cluster_ray_version = '2.33'
@pytest.mark.parametrize("cluster_ray_version", ["2.9", "2.33"])
def test_cluster_management(self, cluster_ray_version):
assert ray.__version__ == RAY_VERSION
aiplatform.init(project=PROJECT_ID, location="us-central1")
# CPU default cluster
head_node_type = vertex_ray.Resources()
worker_node_types = [vertex_ray.Resources()]
timestamp = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
> cluster_resource_name = vertex_ray.create_ray_cluster(
head_node_type=head_node_type,
worker_node_types=worker_node_types,
cluster_name=f"ray-cluster-{timestamp}-test-cluster-management",
ray_version=cluster_ray_version,
)
tests/system/vertex_ray/test_cluster_management.py:45:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/vertex_ray/cluster_init.py:373: in create_ray_cluster
response = _gapic_utils.get_persistent_resource(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
persistent_resource_name = 'projects/580378083368/locations/us-central1/persistentResources/ray-cluster-2025-02-27-23-45-50-test-cluster-management'
tolerance = 1
def get_persistent_resource(
persistent_resource_name: str, tolerance: Optional[int] = 0
):
"""Get persistent resource.
Args:
persistent_resource_name:
"projects//locations//persistentResources/".
tolerance: number of attemps to get persistent resource.
Returns:
aiplatform_v1.PersistentResource if state is RUNNING.
Raises:
ValueError: Invalid cluster resource name.
RuntimeError: Service returns error.
RuntimeError: Cluster resource state is STOPPING.
RuntimeError: Cluster resource state is ERROR.
"""
client = create_persistent_resource_client()
request = GetPersistentResourceRequest(name=persistent_resource_name)
# TODO(b/277117901): Add test cases for polling and error handling
num_attempts = 0
while True:
try:
response = client.get_persistent_resource(request)
except exceptions.NotFound:
response = None
if num_attempts >= tolerance:
raise ValueError(
"[Ray on Vertex AI]: Invalid cluster_resource_name (404 not found)."
)
if response:
if response.error.message:
logging.error("[Ray on Vertex AI]: %s" % response.error.message)
> raise RuntimeError("[Ray on Vertex AI]: Cluster returned an error.")
E RuntimeError: [Ray on Vertex AI]: Cluster returned an error.
google/cloud/aiplatform/vertex_ray/util/_gapic_utils.py:115: RuntimeError
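For reference, the tolerance logic in `get_persistent_resource` reduces to a small poll-with-tolerance loop. A standalone sketch under stated assumptions: `fetch` is a hypothetical stand-in for `client.get_persistent_resource(request)`, and only the NotFound/tolerance branch visible in the traceback is modeled:

# Sketch of the poll-with-tolerance pattern from get_persistent_resource;
# fetch() is a hypothetical stand-in for the GAPIC get call.
import time

def poll_with_tolerance(fetch, tolerance=0, interval=10.0):
    attempts = 0
    while True:
        try:
            resource = fetch()
        except Exception:  # stands in for google.api_core.exceptions.NotFound
            resource = None
            if attempts >= tolerance:
                raise ValueError("resource not found after retries")
            attempts += 1
        if resource is not None:
            return resource
        time.sleep(interval)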
----------------------------- Captured stdout call -----------------------------
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 1; sleeping for 0:02:30 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 2; sleeping for 0:01:54.750000 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 3; sleeping for 0:01:27.783750 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 4; sleeping for 0:01:07.154569 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 5; sleeping for 0:00:51.373245 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 6; sleeping for 0:00:39.300532 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 7; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 8; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 9; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 10; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 11; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 12; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 13; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 14; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 15; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 16; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 17; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 18; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 19; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 20; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 21; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 22; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 23; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 24; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 25; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 26; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 27; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 28; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 29; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 30; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 31; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 32; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 33; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 34; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 35; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 36; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 37; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 38; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 39; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 40; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 41; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 42; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 43; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 44; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 45; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 46; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 47; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 48; sleeping for 0:00:30.064907 seconds
------------------------------ Captured log call -------------------------------
ERROR root:_gapic_utils.py:114 [Ray on Vertex AI]: An internal error occurred on your cluster. Please try recreating one in a few minutes. If you still experience errors, contact Cloud AI Platform.
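A side note on the captured polling cadence: the sleep times decay multiplicatively (factor of about 0.765, starting at 150 s) until the next step would drop below roughly 30 s, after which the interval stays fixed at 30.064907 s. A sketch reproducing that schedule under those inferred constants (the real implementation may differ):

# Reproduces the observed provisioning-wait schedule under inferred
# constants: start at 150 s, multiply by ~0.765 per attempt, and stop
# decaying once the next value would fall below 30 s.
import datetime

def sleep_schedule(attempts, start=150.0, decay=0.765, min_delay=30.0):
    delay = start
    for attempt in range(1, attempts + 1):
        yield attempt, datetime.timedelta(seconds=delay)
        if delay * decay >= min_delay:
            delay *= decay

for attempt, delay in sleep_schedule(9):
    print(f"attempt {attempt}; sleeping for {delay}")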
________ TestJobSubmissionDashboard.test_job_submission_dashboard[2.9] _________
[gw0] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
args = (parent: "projects/580378083368/locations/us-central1"
persistent_resource {
resource_pools {
id: "head-node"
...s_spec {
}
}
}
}
persistent_resource_id: "ray-cluster-2025-02-28-00-15-34-test-job-submission-dashboard"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/580378083368/locations/us-central1'), ('x-goog-api-client', '.../1.82.0+vertex_ray+top_google_constructor_method+google.cloud.aiplatform.vertex_ray.cluster_init.create_ray_cluster')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:247: in __call__
response, ignored_call = self._with_call(request,
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:290: in _with_call
return call.result(), call
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:343: in result
raise self
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:274: in continuation
response, call = self._thunk(new_method).with_call(
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:957: in with_call
return _end_unary_response_blocking(state, call, True, None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state =
call =
with_call = True, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.FAILED_PRECONDITION
E details = "You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another."
E debug_error_string = "UNKNOWN:Error received from peer ipv4:142.251.188.95:443 {grpc_message:"You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another.", grpc_status:9, created_time:"2025-02-28T00:15:34.576884676+00:00"}"
E >
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
head_node_type = Resources(machine_type='n1-standard-16', node_count=1, accelerator_type=None, accelerator_count=0, boot_disk_type='pd-ssd', boot_disk_size_gb=100, custom_image=None, autoscaling_spec=None)
python_version = '3.10', ray_version = '2.9', network = None
service_account = None
cluster_name = 'ray-cluster-2025-02-28-00-15-34-test-job-submission-dashboard'
worker_node_types = [Resources(machine_type='n1-standard-16', node_count=1, accelerator_type=None, accelerator_count=0, boot_disk_type='pd-ssd', boot_disk_size_gb=100, custom_image=None, autoscaling_spec=None)]
custom_images = None, enable_metrics_collection = True, enable_logging = True
psc_interface_config = None, reserved_ip_ranges = None, nfs_mounts = None
labels = None
def create_ray_cluster(
head_node_type: Optional[resources.Resources] = resources.Resources(),
python_version: Optional[str] = "3.10",
ray_version: Optional[str] = "2.33",
network: Optional[str] = None,
service_account: Optional[str] = None,
cluster_name: Optional[str] = None,
worker_node_types: Optional[List[resources.Resources]] = [resources.Resources()],
custom_images: Optional[resources.NodeImages] = None,
enable_metrics_collection: Optional[bool] = True,
enable_logging: Optional[bool] = True,
psc_interface_config: Optional[resources.PscIConfig] = None,
reserved_ip_ranges: Optional[List[str]] = None,
nfs_mounts: Optional[List[resources.NfsMount]] = None,
labels: Optional[Dict[str, str]] = None,
) -> str:
"""Create a ray cluster on the Vertex AI.
Sample usage:
from vertex_ray import Resources
head_node_type = Resources(
machine_type="n1-standard-8",
node_count=1,
accelerator_type="NVIDIA_TESLA_K80",
accelerator_count=1,
custom_image="us-docker.pkg.dev/my-project/ray-cpu-image.2.9:latest", # Optional
)
worker_node_types = [Resources(
machine_type="n1-standard-8",
node_count=2,
accelerator_type="NVIDIA_TESLA_K80",
accelerator_count=1,
custom_image="us-docker.pkg.dev/my-project/ray-gpu-image.2.9:latest", # Optional
)]
cluster_resource_name = vertex_ray.create_ray_cluster(
head_node_type=head_node_type,
network="projects/my-project-number/global/networks/my-vpc-name", # Optional
service_account="my-service-account@my-project-number.iam.gserviceaccount.com", # Optional
cluster_name="my-cluster-name", # Optional
worker_node_types=worker_node_types,
ray_version="2.9",
)
After the Ray cluster is set up, you can call
`ray.init(f"vertex_ray://{cluster_resource_name}", runtime_env=...)` to connect
to the cluster without specifying its address. To shut down the
cluster, call `vertex_ray.delete_ray_cluster()`.
Note: If the active ray cluster has not finished shutting down, you cannot
create a new ray cluster with the same cluster_name.
Args:
head_node_type: The head node resource. Resources.node_count must be 1.
If not set, default value of Resources() class will be used.
python_version: Python version for the ray cluster.
ray_version: Ray version for the ray cluster. Default is 2.33.0.
network: Virtual private cloud (VPC) network. For Ray Client, VPC
peering is required to connect to the Ray Cluster managed in the
Vertex API service. For Ray Job API, VPC network is not required
because Ray Cluster connection can be accessed through dashboard
address.
service_account: Service account to be used for running Ray programs on
the cluster.
cluster_name: This value may be up to 63 characters, and valid
characters are `[a-z0-9_-]`. The first character cannot be a number
or hyphen.
worker_node_types: The list of Resources of the worker nodes. The same
Resources object should not appear multiple times in the list.
custom_images: The NodeImages which specifies head node and worker nodes
images. All the workers will share the same image. If each Resource
has a specific custom image, use `Resources.custom_image` for
head/worker_node_type(s). Note that configuring `Resources.custom_image`
will override `custom_images` here. Allowlist only.
enable_metrics_collection: Enable Ray metrics collection for visualization.
enable_logging: Enable exporting Ray logs to Cloud Logging.
psc_interface_config: PSC-I config.
reserved_ip_ranges: A list of names for the reserved IP ranges under
the VPC network that can be used for this cluster. If set, we will
deploy the cluster within the provided IP ranges. Otherwise, the
cluster is deployed to any IP ranges under the provided VPC network.
Example: ["vertex-ai-ip-range"].
labels:
The labels with user-defined metadata to organize Ray cluster.
Label keys and values can be no longer than 64 characters (Unicode
codepoints), can only contain lowercase letters, numeric characters,
underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
Returns:
The cluster_resource_name of the initiated Ray cluster on Vertex.
Raises:
ValueError: If the cluster is not created successfully.
RuntimeError: If the ray_version is 2.4.
"""
if network is None:
logging.info(
"[Ray on Vertex]: No VPC network configured. It is required for client connection."
)
if ray_version == "2.4":
raise RuntimeError(_V2_4_WARNING_MESSAGE)
if ray_version == "2.9.3":
warnings.warn(_V2_9_WARNING_MESSAGE, DeprecationWarning, stacklevel=1)
local_ray_version = _validation_utils.get_local_ray_version()
if ray_version != local_ray_version:
if custom_images is None and head_node_type.custom_image is None:
install_ray_version = "2.33.0"
logging.info(
"[Ray on Vertex]: Local runtime has Ray version %s"
", but the requested cluster runtime has %s. Please "
"ensure that the Ray versions match for client connectivity. You may "
'"pip install --user --force-reinstall ray[default]==%s"'
" and restart runtime before cluster connection."
% (local_ray_version, ray_version, install_ray_version)
)
else:
logging.info(
"[Ray on Vertex]: Local runtime has Ray version %s."
"Please ensure that the Ray versions match for client connectivity."
% local_ray_verion
)
if cluster_name is None:
cluster_name = "ray-cluster-" + utils.timestamped_unique_name()
if head_node_type:
if head_node_type.node_count != 1:
raise ValueError(
"[Ray on Vertex AI]: For head_node_type, "
+ "Resources.node_count must be 1."
)
if head_node_type.autoscaling_spec is not None:
raise ValueError(
"[Ray on Vertex AI]: For head_node_type, "
+ "Resources.autoscaling_spec must be None."
)
if (
head_node_type.accelerator_type is None
and head_node_type.accelerator_count > 0
):
raise ValueError(
"[Ray on Vertex]: accelerator_type must be specified when"
+ " accelerator_count is set to a value other than 0."
)
resource_pool_images = {}
# head node
resource_pool_0 = ResourcePool()
resource_pool_0.id = "head-node"
resource_pool_0.replica_count = head_node_type.node_count
resource_pool_0.machine_spec.machine_type = head_node_type.machine_type
resource_pool_0.machine_spec.accelerator_count = head_node_type.accelerator_count
resource_pool_0.machine_spec.accelerator_type = head_node_type.accelerator_type
resource_pool_0.disk_spec.boot_disk_type = head_node_type.boot_disk_type
resource_pool_0.disk_spec.boot_disk_size_gb = head_node_type.boot_disk_size_gb
enable_cuda = head_node_type.accelerator_count > 0
if head_node_type.custom_image is not None:
image_uri = head_node_type.custom_image
elif custom_images is None:
image_uri = _validation_utils.get_image_uri(
ray_version, python_version, enable_cuda
)
elif custom_images.head is not None and custom_images.worker is not None:
image_uri = custom_images.head
else:
raise ValueError(
"[Ray on Vertex AI]: custom_images.head and custom_images.worker must be specified when custom_images is set."
)
resource_pool_images[resource_pool_0.id] = image_uri
worker_pools = []
i = 0
if worker_node_types:
for worker_node_type in worker_node_types:
if (
worker_node_type.accelerator_type is None
and worker_node_type.accelerator_count > 0
):
raise ValueError(
"[Ray on Vertex]: accelerator_type must be specified when"
+ " accelerator_count is set to a value other than 0."
)
additional_replica_count = resources._check_machine_spec_identical(
head_node_type, worker_node_type
)
if worker_node_type.autoscaling_spec is None:
# Worker and head share the same MachineSpec, merge them into the
# same ResourcePool
resource_pool_0.replica_count = (
resource_pool_0.replica_count + additional_replica_count
)
else:
if additional_replica_count > 0:
# Autoscaling for single ResourcePool (homogeneous cluster).
resource_pool_0.replica_count = None
resource_pool_0.autoscaling_spec.min_replica_count = (
worker_node_type.autoscaling_spec.min_replica_count
)
resource_pool_0.autoscaling_spec.max_replica_count = (
worker_node_type.autoscaling_spec.max_replica_count
)
if additional_replica_count == 0:
resource_pool = ResourcePool()
resource_pool.id = f"worker-pool{i+1}"
if worker_node_type.autoscaling_spec is None:
resource_pool.replica_count = worker_node_type.node_count
else:
# Autoscaling for worker ResourcePool.
resource_pool.autoscaling_spec.min_replica_count = (
worker_node_type.autoscaling_spec.min_replica_count
)
resource_pool.autoscaling_spec.max_replica_count = (
worker_node_type.autoscaling_spec.max_replica_count
)
resource_pool.machine_spec.machine_type = worker_node_type.machine_type
resource_pool.machine_spec.accelerator_count = (
worker_node_type.accelerator_count
)
resource_pool.machine_spec.accelerator_type = (
worker_node_type.accelerator_type
)
resource_pool.disk_spec.boot_disk_type = worker_node_type.boot_disk_type
resource_pool.disk_spec.boot_disk_size_gb = (
worker_node_type.boot_disk_size_gb
)
worker_pools.append(resource_pool)
enable_cuda = worker_node_type.accelerator_count > 0
if worker_node_type.custom_image is not None:
image_uri = worker_node_type.custom_image
elif custom_images is None:
image_uri = _validation_utils.get_image_uri(
ray_version, python_version, enable_cuda
)
else:
image_uri = custom_images.worker
resource_pool_images[resource_pool.id] = image_uri
i += 1
resource_pools = [resource_pool_0] + worker_pools
metrics_collection_disabled = not enable_metrics_collection
ray_metric_spec = RayMetricSpec(disabled=metrics_collection_disabled)
logging_disabled = not enable_logging
ray_logs_spec = RayLogsSpec(disabled=logging_disabled)
ray_spec = RaySpec(
resource_pool_images=resource_pool_images,
ray_metric_spec=ray_metric_spec,
ray_logs_spec=ray_logs_spec,
)
if nfs_mounts:
gapic_nfs_mounts = []
for nfs_mount in nfs_mounts:
gapic_nfs_mounts.append(
NfsMount(
server=nfs_mount.server,
path=nfs_mount.path,
mount_point=nfs_mount.mount_point,
)
)
ray_spec.nfs_mounts = gapic_nfs_mounts
if service_account:
service_account_spec = ServiceAccountSpec(
enable_custom_service_account=True,
service_account=service_account,
)
resource_runtime_spec = ResourceRuntimeSpec(
ray_spec=ray_spec,
service_account_spec=service_account_spec,
)
else:
resource_runtime_spec = ResourceRuntimeSpec(ray_spec=ray_spec)
if psc_interface_config:
gapic_psc_interface_config = PscInterfaceConfig(
network_attachment=psc_interface_config.network_attachment,
)
else:
gapic_psc_interface_config = None
persistent_resource = PersistentResource(
resource_pools=resource_pools,
network=network,
labels=labels,
resource_runtime_spec=resource_runtime_spec,
psc_interface_config=gapic_psc_interface_config,
reserved_ip_ranges=reserved_ip_ranges,
)
location = initializer.global_config.location
project_id = initializer.global_config.project
project_number = resource_manager_utils.get_project_number(project_id)
parent = f"projects/{project_number}/locations/{location}"
request = persistent_resource_service.CreatePersistentResourceRequest(
parent=parent,
persistent_resource=persistent_resource,
persistent_resource_id=cluster_name,
)
client = _gapic_utils.create_persistent_resource_client()
try:
> _ = client.create_persistent_resource(request)
google/cloud/aiplatform/vertex_ray/cluster_init.py:367:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform_v1beta1/services/persistent_resource_service/client.py:1006: in create_persistent_resource
response = rpc(
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131: in __call__
return wrapped_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "projects/580378083368/locations/us-central1"
persistent_resource {
resource_pools {
id: "head-node"
...s_spec {
}
}
}
}
persistent_resource_id: "ray-cluster-2025-02-28-00-15-34-test-job-submission-dashboard"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/580378083368/locations/us-central1'), ('x-goog-api-client', '.../1.82.0+vertex_ray+top_google_constructor_method+google.cloud.aiplatform.vertex_ray.cluster_init.create_ray_cluster')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.FailedPrecondition: 400 You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another.
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:78: FailedPrecondition
The above exception was the direct cause of the following exception:
self =
cluster_ray_version = '2.9'
@pytest.mark.parametrize("cluster_ray_version", ["2.9", "2.33"])
def test_job_submission_dashboard(self, cluster_ray_version):
assert ray.__version__ == RAY_VERSION
aiplatform.init(project=PROJECT_ID, location="us-central1")
head_node_type = vertex_ray.Resources()
worker_node_types = [vertex_ray.Resources()]
timestamp = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
# Create cluster, get dashboard address
> cluster_resource_name = vertex_ray.create_ray_cluster(
head_node_type=head_node_type,
worker_node_types=worker_node_types,
cluster_name=f"ray-cluster-{timestamp}-test-job-submission-dashboard",
ray_version=cluster_ray_version,
)
tests/system/vertex_ray/test_job_submission_dashboard.py:49:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
head_node_type = Resources(machine_type='n1-standard-16', node_count=1, accelerator_type=None, accelerator_count=0, boot_disk_type='pd-ssd', boot_disk_size_gb=100, custom_image=None, autoscaling_spec=None)
python_version = '3.10', ray_version = '2.9', network = None
service_account = None
cluster_name = 'ray-cluster-2025-02-28-00-15-34-test-job-submission-dashboard'
worker_node_types = [Resources(machine_type='n1-standard-16', node_count=1, accelerator_type=None, accelerator_count=0, boot_disk_type='pd-ssd', boot_disk_size_gb=100, custom_image=None, autoscaling_spec=None)]
custom_images = None, enable_metrics_collection = True, enable_logging = True
psc_interface_config = None, reserved_ip_ranges = None, nfs_mounts = None
labels = None
def create_ray_cluster(
head_node_type: Optional[resources.Resources] = resources.Resources(),
python_version: Optional[str] = "3.10",
ray_version: Optional[str] = "2.33",
network: Optional[str] = None,
service_account: Optional[str] = None,
cluster_name: Optional[str] = None,
worker_node_types: Optional[List[resources.Resources]] = [resources.Resources()],
custom_images: Optional[resources.NodeImages] = None,
enable_metrics_collection: Optional[bool] = True,
enable_logging: Optional[bool] = True,
psc_interface_config: Optional[resources.PscIConfig] = None,
reserved_ip_ranges: Optional[List[str]] = None,
nfs_mounts: Optional[List[resources.NfsMount]] = None,
labels: Optional[Dict[str, str]] = None,
) -> str:
"""Create a ray cluster on the Vertex AI.
Sample usage:
from vertex_ray import Resources
head_node_type = Resources(
machine_type="n1-standard-8",
node_count=1,
accelerator_type="NVIDIA_TESLA_K80",
accelerator_count=1,
custom_image="us-docker.pkg.dev/my-project/ray-cpu-image.2.9:latest", # Optional
)
worker_node_types = [Resources(
machine_type="n1-standard-8",
node_count=2,
accelerator_type="NVIDIA_TESLA_K80",
accelerator_count=1,
custom_image="us-docker.pkg.dev/my-project/ray-gpu-image.2.9:latest", # Optional
)]
cluster_resource_name = vertex_ray.create_ray_cluster(
head_node_type=head_node_type,
network="projects/my-project-number/global/networks/my-vpc-name", # Optional
service_account="my-service-account@my-project-number.iam.gserviceaccount.com", # Optional
cluster_name="my-cluster-name", # Optional
worker_node_types=worker_node_types,
ray_version="2.9",
)
After the Ray cluster is set up, you can call
`ray.init(f"vertex_ray://{cluster_resource_name}", runtime_env=...)` to connect
to the cluster without specifying its address. To shut down the
cluster, call `vertex_ray.delete_ray_cluster()`.
Note: If the active ray cluster has not finished shutting down, you cannot
create a new ray cluster with the same cluster_name.
Args:
head_node_type: The head node resource. Resources.node_count must be 1.
If not set, default value of Resources() class will be used.
python_version: Python version for the ray cluster.
ray_version: Ray version for the ray cluster. Default is 2.33.0.
network: Virtual private cloud (VPC) network. For Ray Client, VPC
peering is required to connect to the Ray Cluster managed in the
Vertex API service. For Ray Job API, VPC network is not required
because Ray Cluster connection can be accessed through dashboard
address.
service_account: Service account to be used for running Ray programs on
the cluster.
cluster_name: This value may be up to 63 characters, and valid
characters are `[a-z0-9_-]`. The first character cannot be a number
or hyphen.
worker_node_types: The list of Resources of the worker nodes. The same
Resources object should not appear multiple times in the list.
custom_images: The NodeImages which specifies head node and worker nodes
images. All the workers will share the same image. If each Resource
has a specific custom image, use `Resources.custom_image` for
head/worker_node_type(s). Note that configuring `Resources.custom_image`
will override `custom_images` here. Allowlist only.
enable_metrics_collection: Enable Ray metrics collection for visualization.
enable_logging: Enable exporting Ray logs to Cloud Logging.
psc_interface_config: PSC-I config.
reserved_ip_ranges: A list of names for the reserved IP ranges under
the VPC network that can be used for this cluster. If set, we will
deploy the cluster within the provided IP ranges. Otherwise, the
cluster is deployed to any IP ranges under the provided VPC network.
Example: ["vertex-ai-ip-range"].
labels:
The labels with user-defined metadata to organize Ray cluster.
Label keys and values can be no longer than 64 characters (Unicode
codepoints), can only contain lowercase letters, numeric characters,
underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
Returns:
The cluster_resource_name of the initiated Ray cluster on Vertex.
Raises:
ValueError: If the cluster is not created successfully.
RuntimeError: If the ray_version is 2.4.
"""
if network is None:
logging.info(
"[Ray on Vertex]: No VPC network configured. It is required for client connection."
)
if ray_version == "2.4":
raise RuntimeError(_V2_4_WARNING_MESSAGE)
if ray_version == "2.9.3":
warnings.warn(_V2_9_WARNING_MESSAGE, DeprecationWarning, stacklevel=1)
local_ray_version = _validation_utils.get_local_ray_version()
if ray_version != local_ray_version:
if custom_images is None and head_node_type.custom_image is None:
install_ray_version = "2.33.0"
logging.info(
"[Ray on Vertex]: Local runtime has Ray version %s"
", but the requested cluster runtime has %s. Please "
"ensure that the Ray versions match for client connectivity. You may "
'"pip install --user --force-reinstall ray[default]==%s"'
" and restart runtime before cluster connection."
% (local_ray_version, ray_version, install_ray_version)
)
else:
logging.info(
"[Ray on Vertex]: Local runtime has Ray version %s."
"Please ensure that the Ray versions match for client connectivity."
% local_ray_verion
)
if cluster_name is None:
cluster_name = "ray-cluster-" + utils.timestamped_unique_name()
if head_node_type:
if head_node_type.node_count != 1:
raise ValueError(
"[Ray on Vertex AI]: For head_node_type, "
+ "Resources.node_count must be 1."
)
if head_node_type.autoscaling_spec is not None:
raise ValueError(
"[Ray on Vertex AI]: For head_node_type, "
+ "Resources.autoscaling_spec must be None."
)
if (
head_node_type.accelerator_type is None
and head_node_type.accelerator_count > 0
):
raise ValueError(
"[Ray on Vertex]: accelerator_type must be specified when"
+ " accelerator_count is set to a value other than 0."
)
resource_pool_images = {}
# head node
resource_pool_0 = ResourcePool()
resource_pool_0.id = "head-node"
resource_pool_0.replica_count = head_node_type.node_count
resource_pool_0.machine_spec.machine_type = head_node_type.machine_type
resource_pool_0.machine_spec.accelerator_count = head_node_type.accelerator_count
resource_pool_0.machine_spec.accelerator_type = head_node_type.accelerator_type
resource_pool_0.disk_spec.boot_disk_type = head_node_type.boot_disk_type
resource_pool_0.disk_spec.boot_disk_size_gb = head_node_type.boot_disk_size_gb
enable_cuda = head_node_type.accelerator_count > 0
if head_node_type.custom_image is not None:
image_uri = head_node_type.custom_image
elif custom_images is None:
image_uri = _validation_utils.get_image_uri(
ray_version, python_version, enable_cuda
)
elif custom_images.head is not None and custom_images.worker is not None:
image_uri = custom_images.head
else:
raise ValueError(
"[Ray on Vertex AI]: custom_images.head and custom_images.worker must be specified when custom_images is set."
)
resource_pool_images[resource_pool_0.id] = image_uri
worker_pools = []
i = 0
if worker_node_types:
for worker_node_type in worker_node_types:
if (
worker_node_type.accelerator_type is None
and worker_node_type.accelerator_count > 0
):
raise ValueError(
"[Ray on Vertex]: accelerator_type must be specified when"
+ " accelerator_count is set to a value other than 0."
)
additional_replica_count = resources._check_machine_spec_identical(
head_node_type, worker_node_type
)
if worker_node_type.autoscaling_spec is None:
# Worker and head share the same MachineSpec, merge them into the
# same ResourcePool
resource_pool_0.replica_count = (
resource_pool_0.replica_count + additional_replica_count
)
else:
if additional_replica_count > 0:
# Autoscaling for single ResourcePool (homogeneous cluster).
resource_pool_0.replica_count = None
resource_pool_0.autoscaling_spec.min_replica_count = (
worker_node_type.autoscaling_spec.min_replica_count
)
resource_pool_0.autoscaling_spec.max_replica_count = (
worker_node_type.autoscaling_spec.max_replica_count
)
if additional_replica_count == 0:
resource_pool = ResourcePool()
resource_pool.id = f"worker-pool{i+1}"
if worker_node_type.autoscaling_spec is None:
resource_pool.replica_count = worker_node_type.node_count
else:
# Autoscaling for worker ResourcePool.
resource_pool.autoscaling_spec.min_replica_count = (
worker_node_type.autoscaling_spec.min_replica_count
)
resource_pool.autoscaling_spec.max_replica_count = (
worker_node_type.autoscaling_spec.max_replica_count
)
resource_pool.machine_spec.machine_type = worker_node_type.machine_type
resource_pool.machine_spec.accelerator_count = (
worker_node_type.accelerator_count
)
resource_pool.machine_spec.accelerator_type = (
worker_node_type.accelerator_type
)
resource_pool.disk_spec.boot_disk_type = worker_node_type.boot_disk_type
resource_pool.disk_spec.boot_disk_size_gb = (
worker_node_type.boot_disk_size_gb
)
worker_pools.append(resource_pool)
enable_cuda = worker_node_type.accelerator_count > 0
if worker_node_type.custom_image is not None:
image_uri = worker_node_type.custom_image
elif custom_images is None:
image_uri = _validation_utils.get_image_uri(
ray_version, python_version, enable_cuda
)
else:
image_uri = custom_images.worker
resource_pool_images[resource_pool.id] = image_uri
i += 1
resource_pools = [resource_pool_0] + worker_pools
metrics_collection_disabled = not enable_metrics_collection
ray_metric_spec = RayMetricSpec(disabled=metrics_collection_disabled)
logging_disabled = not enable_logging
ray_logs_spec = RayLogsSpec(disabled=logging_disabled)
ray_spec = RaySpec(
resource_pool_images=resource_pool_images,
ray_metric_spec=ray_metric_spec,
ray_logs_spec=ray_logs_spec,
)
if nfs_mounts:
gapic_nfs_mounts = []
for nfs_mount in nfs_mounts:
gapic_nfs_mounts.append(
NfsMount(
server=nfs_mount.server,
path=nfs_mount.path,
mount_point=nfs_mount.mount_point,
)
)
ray_spec.nfs_mounts = gapic_nfs_mounts
if service_account:
service_account_spec = ServiceAccountSpec(
enable_custom_service_account=True,
service_account=service_account,
)
resource_runtime_spec = ResourceRuntimeSpec(
ray_spec=ray_spec,
service_account_spec=service_account_spec,
)
else:
resource_runtime_spec = ResourceRuntimeSpec(ray_spec=ray_spec)
if psc_interface_config:
gapic_psc_interface_config = PscInterfaceConfig(
network_attachment=psc_interface_config.network_attachment,
)
else:
gapic_psc_interface_config = None
persistent_resource = PersistentResource(
resource_pools=resource_pools,
network=network,
labels=labels,
resource_runtime_spec=resource_runtime_spec,
psc_interface_config=gapic_psc_interface_config,
reserved_ip_ranges=reserved_ip_ranges,
)
location = initializer.global_config.location
project_id = initializer.global_config.project
project_number = resource_manager_utils.get_project_number(project_id)
parent = f"projects/{project_number}/locations/{location}"
request = persistent_resource_service.CreatePersistentResourceRequest(
parent=parent,
persistent_resource=persistent_resource,
persistent_resource_id=cluster_name,
)
client = _gapic_utils.create_persistent_resource_client()
try:
_ = client.create_persistent_resource(request)
except Exception as e:
> raise ValueError("Failed in cluster creation due to: ", e) from e
E ValueError: ('Failed in cluster creation due to: ', FailedPrecondition('You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another.'))
google/cloud/aiplatform/vertex_ray/cluster_init.py:369: ValueError
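Because `create_ray_cluster` wraps every creation failure in a generic ValueError (chaining the original via `from e`), a caller that wants to react specifically to quota exhaustion has to inspect the chained cause. A hedged sketch of that pattern, using the exception types visible in the tracebacks above; the `create_ray_cluster` arguments are assumed:

# Sketch: recover the underlying API error from the ValueError raised by
# create_ray_cluster, then branch on quota exhaustion.
from google.api_core import exceptions as gapi_exceptions
from google.cloud.aiplatform import vertex_ray

try:
    name = vertex_ray.create_ray_cluster(cluster_name="my-cluster")  # assumed args
except ValueError as err:
    cause = err.__cause__
    if isinstance(cause, gapi_exceptions.FailedPrecondition):
        # Regional PersistentResource quota exhausted: clean up stale
        # clusters or retry in another region.
        ...
    else:
        raise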
________ TestJobSubmissionDashboard.test_job_submission_dashboard[2.33] ________
[gw0] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
args = (parent: "projects/580378083368/locations/us-central1"
persistent_resource {
resource_pools {
id: "head-node"
...s_spec {
}
}
}
}
persistent_resource_id: "ray-cluster-2025-02-28-00-15-34-test-job-submission-dashboard"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/580378083368/locations/us-central1'), ('x-goog-api-client', '.../1.82.0+vertex_ray+top_google_constructor_method+google.cloud.aiplatform.vertex_ray.cluster_init.create_ray_cluster')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:247: in __call__
response, ignored_call = self._with_call(request,
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:290: in _with_call
return call.result(), call
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:343: in result
raise self
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:274: in continuation
response, call = self._thunk(new_method).with_call(
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:957: in with_call
return _end_unary_response_blocking(state, call, True, None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state =
call =
with_call = True, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.FAILED_PRECONDITION
E details = "You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another."
E debug_error_string = "UNKNOWN:Error received from peer ipv4:142.251.188.95:443 {created_time:"2025-02-28T00:15:35.033907565+00:00", grpc_status:9, grpc_message:"You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another."}"
E >
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
head_node_type = Resources(machine_type='n1-standard-16', node_count=1, accelerator_type=None, accelerator_count=0, boot_disk_type='pd-ssd', boot_disk_size_gb=100, custom_image=None, autoscaling_spec=None)
python_version = '3.10', ray_version = '2.33', network = None
service_account = None
cluster_name = 'ray-cluster-2025-02-28-00-15-34-test-job-submission-dashboard'
worker_node_types = [Resources(machine_type='n1-standard-16', node_count=1, accelerator_type=None, accelerator_count=0, boot_disk_type='pd-ssd', boot_disk_size_gb=100, custom_image=None, autoscaling_spec=None)]
custom_images = None, enable_metrics_collection = True, enable_logging = True
psc_interface_config = None, reserved_ip_ranges = None, nfs_mounts = None
labels = None
def create_ray_cluster(
head_node_type: Optional[resources.Resources] = resources.Resources(),
python_version: Optional[str] = "3.10",
ray_version: Optional[str] = "2.33",
network: Optional[str] = None,
service_account: Optional[str] = None,
cluster_name: Optional[str] = None,
worker_node_types: Optional[List[resources.Resources]] = [resources.Resources()],
custom_images: Optional[resources.NodeImages] = None,
enable_metrics_collection: Optional[bool] = True,
enable_logging: Optional[bool] = True,
psc_interface_config: Optional[resources.PscIConfig] = None,
reserved_ip_ranges: Optional[List[str]] = None,
nfs_mounts: Optional[List[resources.NfsMount]] = None,
labels: Optional[Dict[str, str]] = None,
) -> str:
"""Create a ray cluster on the Vertex AI.
Sample usage:
from vertex_ray import Resources
head_node_type = Resources(
machine_type="n1-standard-8",
node_count=1,
accelerator_type="NVIDIA_TESLA_K80",
accelerator_count=1,
custom_image="us-docker.pkg.dev/my-project/ray-cpu-image.2.9:latest", # Optional
)
worker_node_types = [Resources(
machine_type="n1-standard-8",
node_count=2,
accelerator_type="NVIDIA_TESLA_K80",
accelerator_count=1,
custom_image="us-docker.pkg.dev/my-project/ray-gpu-image.2.9:latest", # Optional
)]
cluster_resource_name = vertex_ray.create_ray_cluster(
head_node_type=head_node_type,
network="projects/my-project-number/global/networks/my-vpc-name", # Optional
service_account="my-service-account@my-project-number.iam.gserviceaccount.com", # Optional
cluster_name="my-cluster-name", # Optional
worker_node_types=worker_node_types,
ray_version="2.9",
)
After a ray cluster is set up, you can call
`ray.init(f"vertex_ray://{cluster_resource_name}", runtime_env=...)` to connect
to the cluster without specifying its address. To shut down the cluster, call
`vertex_ray.delete_ray_cluster()`.
Note: If the active ray cluster has not finished shutting down, you cannot
create a new ray cluster with the same cluster_name.
Args:
head_node_type: The head node resource. Resources.node_count must be 1.
If not set, the default values of the Resources() class will be used.
python_version: Python version for the ray cluster.
ray_version: Ray version for the ray cluster. Default is 2.33.0.
network: Virtual private cloud (VPC) network. For Ray Client, VPC
peering is required to connect to the Ray Cluster managed in the
Vertex API service. For Ray Job API, VPC network is not required
because Ray Cluster connection can be accessed through dashboard
address.
service_account: Service account to be used for running Ray programs on
the cluster.
cluster_name: This value may be up to 63 characters, and valid
characters are `[a-z0-9_-]`. The first character cannot be a number
or hyphen.
worker_node_types: The list of Resources of the worker nodes. The same
Resources object should not appear multiple times in the list.
custom_images: The NodeImages which specifies the head node and worker node
images. All the workers will share the same image. If each Resource
has a specific custom image, use `Resources.custom_image` for
head/worker_node_type(s). Note that configuring `Resources.custom_image`
will override `custom_images` here. Allowlist only.
enable_metrics_collection: Enable Ray metrics collection for visualization.
enable_logging: Enable exporting Ray logs to Cloud Logging.
psc_interface_config: PSC-I config.
reserved_ip_ranges: A list of names for the reserved IP ranges under
the VPC network that can be used for this cluster. If set, we will
deploy the cluster within the provided IP ranges. Otherwise, the
cluster is deployed to any IP ranges under the provided VPC network.
Example: ["vertex-ai-ip-range"].
labels:
The labels with user-defined metadata to organize the Ray cluster.
Label keys and values can be no longer than 64 characters (Unicode
codepoints), can only contain lowercase letters, numeric characters,
underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
Returns:
The cluster_resource_name of the initiated Ray cluster on Vertex.
Raises:
ValueError: If the cluster is not created successfully.
RuntimeError: If the ray_version is 2.4.
"""
if network is None:
logging.info(
"[Ray on Vertex]: No VPC network configured. It is required for client connection."
)
if ray_version == "2.4":
raise RuntimeError(_V2_4_WARNING_MESSAGE)
if ray_version == "2.9.3":
warnings.warn(_V2_9_WARNING_MESSAGE, DeprecationWarning, stacklevel=1)
local_ray_version = _validation_utils.get_local_ray_version()
if ray_version != local_ray_version:
if custom_images is None and head_node_type.custom_image is None:
install_ray_version = "2.33.0"
logging.info(
"[Ray on Vertex]: Local runtime has Ray version %s"
", but the requested cluster runtime has %s. Please "
"ensure that the Ray versions match for client connectivity. You may "
'"pip install --user --force-reinstall ray[default]==%s"'
" and restart runtime before cluster connection."
% (local_ray_version, ray_version, install_ray_version)
)
else:
logging.info(
"[Ray on Vertex]: Local runtime has Ray version %s. "
"Please ensure that the Ray versions match for client connectivity."
% local_ray_version
)
if cluster_name is None:
cluster_name = "ray-cluster-" + utils.timestamped_unique_name()
if head_node_type:
if head_node_type.node_count != 1:
raise ValueError(
"[Ray on Vertex AI]: For head_node_type, "
+ "Resources.node_count must be 1."
)
if head_node_type.autoscaling_spec is not None:
raise ValueError(
"[Ray on Vertex AI]: For head_node_type, "
+ "Resources.autoscaling_spec must be None."
)
if (
head_node_type.accelerator_type is None
and head_node_type.accelerator_count > 0
):
raise ValueError(
"[Ray on Vertex]: accelerator_type must be specified when"
+ " accelerator_count is set to a value other than 0."
)
resource_pool_images = {}
# head node
resource_pool_0 = ResourcePool()
resource_pool_0.id = "head-node"
resource_pool_0.replica_count = head_node_type.node_count
resource_pool_0.machine_spec.machine_type = head_node_type.machine_type
resource_pool_0.machine_spec.accelerator_count = head_node_type.accelerator_count
resource_pool_0.machine_spec.accelerator_type = head_node_type.accelerator_type
resource_pool_0.disk_spec.boot_disk_type = head_node_type.boot_disk_type
resource_pool_0.disk_spec.boot_disk_size_gb = head_node_type.boot_disk_size_gb
enable_cuda = head_node_type.accelerator_count > 0
if head_node_type.custom_image is not None:
image_uri = head_node_type.custom_image
elif custom_images is None:
image_uri = _validation_utils.get_image_uri(
ray_version, python_version, enable_cuda
)
elif custom_images.head is not None and custom_images.worker is not None:
image_uri = custom_images.head
else:
raise ValueError(
"[Ray on Vertex AI]: custom_images.head and custom_images.worker must be specified when custom_images is set."
)
resource_pool_images[resource_pool_0.id] = image_uri
worker_pools = []
i = 0
if worker_node_types:
for worker_node_type in worker_node_types:
if (
worker_node_type.accelerator_type is None
and worker_node_type.accelerator_count > 0
):
raise ValueError(
"[Ray on Vertex]: accelerator_type must be specified when"
+ " accelerator_count is set to a value other than 0."
)
additional_replica_count = resources._check_machine_spec_identical(
head_node_type, worker_node_type
)
if worker_node_type.autoscaling_spec is None:
# Worker and head share the same MachineSpec, merge them into the
# same ResourcePool
resource_pool_0.replica_count = (
resource_pool_0.replica_count + additional_replica_count
)
else:
if additional_replica_count > 0:
# Autoscaling for single ResourcePool (homogeneous cluster).
resource_pool_0.replica_count = None
resource_pool_0.autoscaling_spec.min_replica_count = (
worker_node_type.autoscaling_spec.min_replica_count
)
resource_pool_0.autoscaling_spec.max_replica_count = (
worker_node_type.autoscaling_spec.max_replica_count
)
if additional_replica_count == 0:
resource_pool = ResourcePool()
resource_pool.id = f"worker-pool{i+1}"
if worker_node_type.autoscaling_spec is None:
resource_pool.replica_count = worker_node_type.node_count
else:
# Autoscaling for worker ResourcePool.
resource_pool.autoscaling_spec.min_replica_count = (
worker_node_type.autoscaling_spec.min_replica_count
)
resource_pool.autoscaling_spec.max_replica_count = (
worker_node_type.autoscaling_spec.max_replica_count
)
resource_pool.machine_spec.machine_type = worker_node_type.machine_type
resource_pool.machine_spec.accelerator_count = (
worker_node_type.accelerator_count
)
resource_pool.machine_spec.accelerator_type = (
worker_node_type.accelerator_type
)
resource_pool.disk_spec.boot_disk_type = worker_node_type.boot_disk_type
resource_pool.disk_spec.boot_disk_size_gb = (
worker_node_type.boot_disk_size_gb
)
worker_pools.append(resource_pool)
enable_cuda = worker_node_type.accelerator_count > 0
if worker_node_type.custom_image is not None:
image_uri = worker_node_type.custom_image
elif custom_images is None:
image_uri = _validation_utils.get_image_uri(
ray_version, python_version, enable_cuda
)
else:
image_uri = custom_images.worker
resource_pool_images[resource_pool.id] = image_uri
i += 1
resource_pools = [resource_pool_0] + worker_pools
metrics_collection_disabled = not enable_metrics_collection
ray_metric_spec = RayMetricSpec(disabled=metrics_collection_disabled)
logging_disabled = not enable_logging
ray_logs_spec = RayLogsSpec(disabled=logging_disabled)
ray_spec = RaySpec(
resource_pool_images=resource_pool_images,
ray_metric_spec=ray_metric_spec,
ray_logs_spec=ray_logs_spec,
)
if nfs_mounts:
gapic_nfs_mounts = []
for nfs_mount in nfs_mounts:
gapic_nfs_mounts.append(
NfsMount(
server=nfs_mount.server,
path=nfs_mount.path,
mount_point=nfs_mount.mount_point,
)
)
ray_spec.nfs_mounts = gapic_nfs_mounts
if service_account:
service_account_spec = ServiceAccountSpec(
enable_custom_service_account=True,
service_account=service_account,
)
resource_runtime_spec = ResourceRuntimeSpec(
ray_spec=ray_spec,
service_account_spec=service_account_spec,
)
else:
resource_runtime_spec = ResourceRuntimeSpec(ray_spec=ray_spec)
if psc_interface_config:
gapic_psc_interface_config = PscInterfaceConfig(
network_attachment=psc_interface_config.network_attachment,
)
else:
gapic_psc_interface_config = None
persistent_resource = PersistentResource(
resource_pools=resource_pools,
network=network,
labels=labels,
resource_runtime_spec=resource_runtime_spec,
psc_interface_config=gapic_psc_interface_config,
reserved_ip_ranges=reserved_ip_ranges,
)
location = initializer.global_config.location
project_id = initializer.global_config.project
project_number = resource_manager_utils.get_project_number(project_id)
parent = f"projects/{project_number}/locations/{location}"
request = persistent_resource_service.CreatePersistentResourceRequest(
parent=parent,
persistent_resource=persistent_resource,
persistent_resource_id=cluster_name,
)
client = _gapic_utils.create_persistent_resource_client()
try:
> _ = client.create_persistent_resource(request)
google/cloud/aiplatform/vertex_ray/cluster_init.py:367:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform_v1beta1/services/persistent_resource_service/client.py:1006: in create_persistent_resource
response = rpc(
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131: in __call__
return wrapped_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "projects/580378083368/locations/us-central1"
persistent_resource {
resource_pools {
id: "head-node"
...s_spec {
}
}
}
}
persistent_resource_id: "ray-cluster-2025-02-28-00-15-34-test-job-submission-dashboard"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/580378083368/locations/us-central1'), ('x-goog-api-client', '.../1.82.0+vertex_ray+top_google_constructor_method+google.cloud.aiplatform.vertex_ray.cluster_init.create_ray_cluster')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.FailedPrecondition: 400 You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another.
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:78: FailedPrecondition
The above exception was the direct cause of the following exception:
self =
cluster_ray_version = '2.33'
@pytest.mark.parametrize("cluster_ray_version", ["2.9", "2.33"])
def test_job_submission_dashboard(self, cluster_ray_version):
assert ray.__version__ == RAY_VERSION
aiplatform.init(project=PROJECT_ID, location="us-central1")
head_node_type = vertex_ray.Resources()
worker_node_types = [vertex_ray.Resources()]
timestamp = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
# Create cluster, get dashboard address
> cluster_resource_name = vertex_ray.create_ray_cluster(
head_node_type=head_node_type,
worker_node_types=worker_node_types,
cluster_name=f"ray-cluster-{timestamp}-test-job-submission-dashboard",
ray_version=cluster_ray_version,
)
tests/system/vertex_ray/test_job_submission_dashboard.py:49:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
head_node_type = Resources(machine_type='n1-standard-16', node_count=1, accelerator_type=None, accelerator_count=0, boot_disk_type='pd-ssd', boot_disk_size_gb=100, custom_image=None, autoscaling_spec=None)
python_version = '3.10', ray_version = '2.33', network = None
service_account = None
cluster_name = 'ray-cluster-2025-02-28-00-15-34-test-job-submission-dashboard'
worker_node_types = [Resources(machine_type='n1-standard-16', node_count=1, accelerator_type=None, accelerator_count=0, boot_disk_type='pd-ssd', boot_disk_size_gb=100, custom_image=None, autoscaling_spec=None)]
custom_images = None, enable_metrics_collection = True, enable_logging = True
psc_interface_config = None, reserved_ip_ranges = None, nfs_mounts = None
labels = None
def create_ray_cluster(
head_node_type: Optional[resources.Resources] = resources.Resources(),
python_version: Optional[str] = "3.10",
ray_version: Optional[str] = "2.33",
network: Optional[str] = None,
service_account: Optional[str] = None,
cluster_name: Optional[str] = None,
worker_node_types: Optional[List[resources.Resources]] = [resources.Resources()],
custom_images: Optional[resources.NodeImages] = None,
enable_metrics_collection: Optional[bool] = True,
enable_logging: Optional[bool] = True,
psc_interface_config: Optional[resources.PscIConfig] = None,
reserved_ip_ranges: Optional[List[str]] = None,
nfs_mounts: Optional[List[resources.NfsMount]] = None,
labels: Optional[Dict[str, str]] = None,
) -> str:
"""Create a ray cluster on the Vertex AI.
Sample usage:
from vertex_ray import Resources
head_node_type = Resources(
machine_type="n1-standard-8",
node_count=1,
accelerator_type="NVIDIA_TESLA_K80",
accelerator_count=1,
custom_image="us-docker.pkg.dev/my-project/ray-cpu-image.2.9:latest", # Optional
)
worker_node_types = [Resources(
machine_type="n1-standard-8",
node_count=2,
accelerator_type="NVIDIA_TESLA_K80",
accelerator_count=1,
custom_image="us-docker.pkg.dev/my-project/ray-gpu-image.2.9:latest", # Optional
)]
cluster_resource_name = vertex_ray.create_ray_cluster(
head_node_type=head_node_type,
network="projects/my-project-number/global/networks/my-vpc-name", # Optional
service_account="my-service-account@my-project-number.iam.gserviceaccount.com", # Optional
cluster_name="my-cluster-name", # Optional
worker_node_types=worker_node_types,
ray_version="2.9",
)
After a ray cluster is set up, you can call
`ray.init(f"vertex_ray://{cluster_resource_name}", runtime_env=...)` to connect
to the cluster without specifying its address. To shut down the cluster, call
`vertex_ray.delete_ray_cluster()`.
Note: If the active ray cluster has not finished shutting down, you cannot
create a new ray cluster with the same cluster_name.
Args:
head_node_type: The head node resource. Resources.node_count must be 1.
If not set, the default values of the Resources() class will be used.
python_version: Python version for the ray cluster.
ray_version: Ray version for the ray cluster. Default is 2.33.0.
network: Virtual private cloud (VPC) network. For Ray Client, VPC
peering is required to connect to the Ray Cluster managed in the
Vertex API service. For Ray Job API, VPC network is not required
because Ray Cluster connection can be accessed through dashboard
address.
service_account: Service account to be used for running Ray programs on
the cluster.
cluster_name: This value may be up to 63 characters, and valid
characters are `[a-z0-9_-]`. The first character cannot be a number
or hyphen.
worker_node_types: The list of Resources of the worker nodes. The same
Resources object should not appear multiple times in the list.
custom_images: The NodeImages which specifies the head node and worker node
images. All the workers will share the same image. If each Resource
has a specific custom image, use `Resources.custom_image` for
head/worker_node_type(s). Note that configuring `Resources.custom_image`
will override `custom_images` here. Allowlist only.
enable_metrics_collection: Enable Ray metrics collection for visualization.
enable_logging: Enable exporting Ray logs to Cloud Logging.
psc_interface_config: PSC-I config.
reserved_ip_ranges: A list of names for the reserved IP ranges under
the VPC network that can be used for this cluster. If set, we will
deploy the cluster within the provided IP ranges. Otherwise, the
cluster is deployed to any IP ranges under the provided VPC network.
Example: ["vertex-ai-ip-range"].
labels:
The labels with user-defined metadata to organize the Ray cluster.
Label keys and values can be no longer than 64 characters (Unicode
codepoints), can only contain lowercase letters, numeric characters,
underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
Returns:
The cluster_resource_name of the initiated Ray cluster on Vertex.
Raises:
ValueError: If the cluster is not created successfully.
RuntimeError: If the ray_version is 2.4.
"""
if network is None:
logging.info(
"[Ray on Vertex]: No VPC network configured. It is required for client connection."
)
if ray_version == "2.4":
raise RuntimeError(_V2_4_WARNING_MESSAGE)
if ray_version == "2.9.3":
warnings.warn(_V2_9_WARNING_MESSAGE, DeprecationWarning, stacklevel=1)
local_ray_version = _validation_utils.get_local_ray_version()
if ray_version != local_ray_version:
if custom_images is None and head_node_type.custom_image is None:
install_ray_version = "2.33.0"
logging.info(
"[Ray on Vertex]: Local runtime has Ray version %s"
", but the requested cluster runtime has %s. Please "
"ensure that the Ray versions match for client connectivity. You may "
'"pip install --user --force-reinstall ray[default]==%s"'
" and restart runtime before cluster connection."
% (local_ray_version, ray_version, install_ray_version)
)
else:
logging.info(
"[Ray on Vertex]: Local runtime has Ray version %s. "
"Please ensure that the Ray versions match for client connectivity."
% local_ray_version
)
if cluster_name is None:
cluster_name = "ray-cluster-" + utils.timestamped_unique_name()
if head_node_type:
if head_node_type.node_count != 1:
raise ValueError(
"[Ray on Vertex AI]: For head_node_type, "
+ "Resources.node_count must be 1."
)
if head_node_type.autoscaling_spec is not None:
raise ValueError(
"[Ray on Vertex AI]: For head_node_type, "
+ "Resources.autoscaling_spec must be None."
)
if (
head_node_type.accelerator_type is None
and head_node_type.accelerator_count > 0
):
raise ValueError(
"[Ray on Vertex]: accelerator_type must be specified when"
+ " accelerator_count is set to a value other than 0."
)
resource_pool_images = {}
# head node
resource_pool_0 = ResourcePool()
resource_pool_0.id = "head-node"
resource_pool_0.replica_count = head_node_type.node_count
resource_pool_0.machine_spec.machine_type = head_node_type.machine_type
resource_pool_0.machine_spec.accelerator_count = head_node_type.accelerator_count
resource_pool_0.machine_spec.accelerator_type = head_node_type.accelerator_type
resource_pool_0.disk_spec.boot_disk_type = head_node_type.boot_disk_type
resource_pool_0.disk_spec.boot_disk_size_gb = head_node_type.boot_disk_size_gb
enable_cuda = head_node_type.accelerator_count > 0
if head_node_type.custom_image is not None:
image_uri = head_node_type.custom_image
elif custom_images is None:
image_uri = _validation_utils.get_image_uri(
ray_version, python_version, enable_cuda
)
elif custom_images.head is not None and custom_images.worker is not None:
image_uri = custom_images.head
else:
raise ValueError(
"[Ray on Vertex AI]: custom_images.head and custom_images.worker must be specified when custom_images is set."
)
resource_pool_images[resource_pool_0.id] = image_uri
worker_pools = []
i = 0
if worker_node_types:
for worker_node_type in worker_node_types:
if (
worker_node_type.accelerator_type is None
and worker_node_type.accelerator_count > 0
):
raise ValueError(
"[Ray on Vertex]: accelerator_type must be specified when"
+ " accelerator_count is set to a value other than 0."
)
additional_replica_count = resources._check_machine_spec_identical(
head_node_type, worker_node_type
)
if worker_node_type.autoscaling_spec is None:
# Worker and head share the same MachineSpec, merge them into the
# same ResourcePool
resource_pool_0.replica_count = (
resource_pool_0.replica_count + additional_replica_count
)
else:
if additional_replica_count > 0:
# Autoscaling for single ResourcePool (homogeneous cluster).
resource_pool_0.replica_count = None
resource_pool_0.autoscaling_spec.min_replica_count = (
worker_node_type.autoscaling_spec.min_replica_count
)
resource_pool_0.autoscaling_spec.max_replica_count = (
worker_node_type.autoscaling_spec.max_replica_count
)
if additional_replica_count == 0:
resource_pool = ResourcePool()
resource_pool.id = f"worker-pool{i+1}"
if worker_node_type.autoscaling_spec is None:
resource_pool.replica_count = worker_node_type.node_count
else:
# Autoscaling for worker ResourcePool.
resource_pool.autoscaling_spec.min_replica_count = (
worker_node_type.autoscaling_spec.min_replica_count
)
resource_pool.autoscaling_spec.max_replica_count = (
worker_node_type.autoscaling_spec.max_replica_count
)
resource_pool.machine_spec.machine_type = worker_node_type.machine_type
resource_pool.machine_spec.accelerator_count = (
worker_node_type.accelerator_count
)
resource_pool.machine_spec.accelerator_type = (
worker_node_type.accelerator_type
)
resource_pool.disk_spec.boot_disk_type = worker_node_type.boot_disk_type
resource_pool.disk_spec.boot_disk_size_gb = (
worker_node_type.boot_disk_size_gb
)
worker_pools.append(resource_pool)
enable_cuda = worker_node_type.accelerator_count > 0
if worker_node_type.custom_image is not None:
image_uri = worker_node_type.custom_image
elif custom_images is None:
image_uri = _validation_utils.get_image_uri(
ray_version, python_version, enable_cuda
)
else:
image_uri = custom_images.worker
resource_pool_images[resource_pool.id] = image_uri
i += 1
resource_pools = [resource_pool_0] + worker_pools
metrics_collection_disabled = not enable_metrics_collection
ray_metric_spec = RayMetricSpec(disabled=metrics_collection_disabled)
logging_disabled = not enable_logging
ray_logs_spec = RayLogsSpec(disabled=logging_disabled)
ray_spec = RaySpec(
resource_pool_images=resource_pool_images,
ray_metric_spec=ray_metric_spec,
ray_logs_spec=ray_logs_spec,
)
if nfs_mounts:
gapic_nfs_mounts = []
for nfs_mount in nfs_mounts:
gapic_nfs_mounts.append(
NfsMount(
server=nfs_mount.server,
path=nfs_mount.path,
mount_point=nfs_mount.mount_point,
)
)
ray_spec.nfs_mounts = gapic_nfs_mounts
if service_account:
service_account_spec = ServiceAccountSpec(
enable_custom_service_account=True,
service_account=service_account,
)
resource_runtime_spec = ResourceRuntimeSpec(
ray_spec=ray_spec,
service_account_spec=service_account_spec,
)
else:
resource_runtime_spec = ResourceRuntimeSpec(ray_spec=ray_spec)
if psc_interface_config:
gapic_psc_interface_config = PscInterfaceConfig(
network_attachment=psc_interface_config.network_attachment,
)
else:
gapic_psc_interface_config = None
persistent_resource = PersistentResource(
resource_pools=resource_pools,
network=network,
labels=labels,
resource_runtime_spec=resource_runtime_spec,
psc_interface_config=gapic_psc_interface_config,
reserved_ip_ranges=reserved_ip_ranges,
)
location = initializer.global_config.location
project_id = initializer.global_config.project
project_number = resource_manager_utils.get_project_number(project_id)
parent = f"projects/{project_number}/locations/{location}"
request = persistent_resource_service.CreatePersistentResourceRequest(
parent=parent,
persistent_resource=persistent_resource,
persistent_resource_id=cluster_name,
)
client = _gapic_utils.create_persistent_resource_client()
try:
_ = client.create_persistent_resource(request)
except Exception as e:
> raise ValueError("Failed in cluster creation due to: ", e) from e
E ValueError: ('Failed in cluster creation due to: ', FailedPrecondition('You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another.'))
google/cloud/aiplatform/vertex_ray/cluster_init.py:369: ValueError
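For reference, when create_ray_cluster does succeed, the connection flow described in the docstring above looks like the following sketch. The cluster name and the remote workload are hypothetical examples; note that delete_ray_cluster lives in the vertex_ray namespace:

# Hedged happy-path sketch for the API documented above; names are examples.
import ray
from google.cloud.aiplatform import vertex_ray

cluster_resource_name = vertex_ray.create_ray_cluster(cluster_name="my-cluster-name")
ray.init(f"vertex_ray://{cluster_resource_name}")  # Ray Client; no address needed

@ray.remote
def square(x):
    return x * x

print(ray.get(square.remote(4)))  # runs on the Vertex-managed cluster
vertex_ray.delete_ray_cluster(cluster_resource_name)  # shut down when finished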
____________ TestPersistentResource.test_create_persistent_resource ____________
[gw9] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
args = (parent: "projects/ucaip-sample-tests/locations/us-central1"
persistent_resource {
name: "test-pr-e2e--5ae0e4b4-1358...dard-4"
}
replica_count: 2
}
}
persistent_resource_id: "test-pr-e2e--5ae0e4b4-1358-4fd4-b2ea-2faab2c677b3"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1'), ('x-goog-api-clie...thon/3.10.15 grpc/1.51.3 gax/2.21.0 gapic/1.82.0+top_google_constructor_method+google.cloud.aiplatform.base.wrapper')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:247: in __call__
response, ignored_call = self._with_call(request,
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:290: in _with_call
return call.result(), call
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:343: in result
raise self
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:274: in continuation
response, call = self._thunk(new_method).with_call(
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:957: in with_call
return _end_unary_response_blocking(state, call, True, None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state =
call =
with_call = True, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.FAILED_PRECONDITION
E details = "You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another."
E debug_error_string = "UNKNOWN:Error received from peer ipv4:142.251.188.95:443 {created_time:"2025-02-28T01:53:10.86930611+00:00", grpc_status:9, grpc_message:"You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another."}"
E >
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
self =
shared_state = {}
def test_create_persistent_resource(self, shared_state):
# PersistentResource ID must be shorter than 64 characters.
# IE: "test-pr-e2e-ea3ae19d-3d94-4818-8ecd-1a7a63d7418c"
resource_id = self._make_display_name("")
resource_pools = [
gca_persistent_resource.ResourcePool(
machine_spec=gca_machine_resources.MachineSpec(
machine_type=_TEST_MACHINE_TYPE,
),
replica_count=_TEST_INITIAL_REPLICA_COUNT,
)
]
> test_resource = persistent_resource.PersistentResource.create(
persistent_resource_id=resource_id, resource_pools=resource_pools
)
tests/system/aiplatform/test_persistent_resource.py:61:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/base.py:863: in wrapper
return method(*args, **kwargs)
google/cloud/aiplatform/persistent_resource.py:309: in create
create_lro = cls._create(
google/cloud/aiplatform/persistent_resource.py:376: in _create
return api_client.create_persistent_resource(
google/cloud/aiplatform_v1/services/persistent_resource_service/client.py:961: in create_persistent_resource
response = rpc(
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131: in __call__
return wrapped_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "projects/ucaip-sample-tests/locations/us-central1"
persistent_resource {
name: "test-pr-e2e--5ae0e4b4-1358...dard-4"
}
replica_count: 2
}
}
persistent_resource_id: "test-pr-e2e--5ae0e4b4-1358-4fd4-b2ea-2faab2c677b3"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1'), ('x-goog-api-clie...thon/3.10.15 grpc/1.51.3 gax/2.21.0 gapic/1.82.0+top_google_constructor_method+google.cloud.aiplatform.base.wrapper')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.FailedPrecondition: 400 You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another.
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:78: FailedPrecondition
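The TestPersistentResource failure above shares the same regional-quota root cause as the Ray cluster failures. A pre-flight cleanup sketch for this path, assuming the list()/delete() methods follow the usual aiplatform resource pattern (the "test-pr-e2e-" filter mirrors the persistent_resource_id shown above and is an assumption, as is matching on the .name attribute):

# Hedged pre-flight cleanup before PersistentResource.create; the name filter
# and attribute choice are assumptions based on the IDs in this log.
from google.cloud import aiplatform
from google.cloud.aiplatform import persistent_resource

aiplatform.init(project="ucaip-sample-tests", location="us-central1")

for pr in persistent_resource.PersistentResource.list():
    if pr.name.startswith("test-pr-e2e-"):
        pr.delete()  # frees one PersistentResource slot in the region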
______ TestModelDeploymentMonitoring.test_mdm_two_models_one_valid_config ______
[gw5] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
args = (parent: "projects/ucaip-sample-tests/locations/us-central1"
model_deployment_monitoring_job {
display_name: "temp_e...ocacyorg.joonix.net"
}
enable_logging: true
}
sample_predict_instance {
null_value: NULL_VALUE
}
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1'), ('x-goog-api-clie...0+top_google_constructor_method+google.cloud.aiplatform.jobs.ModelDeploymentMonitoringJob.create')], 'timeout': 3600.0}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:247: in __call__
response, ignored_call = self._with_call(request,
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:290: in _with_call
return call.result(), call
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:343: in result
raise self
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:274: in continuation
response, call = self._thunk(new_method).with_call(
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:957: in with_call
return _end_unary_response_blocking(state, call, True, None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state =
call =
with_call = True, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.PERMISSION_DENIED
E details = "Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again."
E debug_error_string = "UNKNOWN:Error received from peer ipv4:74.125.197.95:443 {created_time:"2025-02-28T01:54:16.985923361+00:00", grpc_status:7, grpc_message:"Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again."}"
E >
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
self =
shared_state = {'resources': [
resource name: projects/580378083368/locations/us-central1/endpoints/8528978766867726336]}
def test_mdm_two_models_one_valid_config(self, shared_state):
"""
Enable model monitoring on two existing models deployed to the same endpoint.
"""
assert len(shared_state["resources"]) == 1
self.endpoint = shared_state["resources"][0]
aiplatform.init(project=e2e_base._PROJECT, location=e2e_base._LOCATION)
# test model monitoring configurations
> job = aiplatform.ModelDeploymentMonitoringJob.create(
display_name=self._make_display_name(key=JOB_NAME),
logging_sampling_strategy=sampling_strategy,
schedule_config=schedule_config,
alert_config=email_alert_config,
objective_configs=objective_config,
create_request_timeout=3600,
project=e2e_base._PROJECT,
location=e2e_base._LOCATION,
endpoint=self.endpoint,
)
tests/system/aiplatform/test_model_monitoring.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/jobs.py:3479: in create
self._gca_resource = self.api_client.create_model_deployment_monitoring_job(
google/cloud/aiplatform_v1/services/job_service/client.py:4469: in create_model_deployment_monitoring_job
response = rpc(
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131: in __call__
return wrapped_func(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/timeout.py:120: in func_with_timeout
return func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "projects/ucaip-sample-tests/locations/us-central1"
model_deployment_monitoring_job {
display_name: "temp_e...ocacyorg.joonix.net"
}
enable_logging: true
}
sample_predict_instance {
null_value: NULL_VALUE
}
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1'), ('x-goog-api-clie...0+top_google_constructor_method+google.cloud.aiplatform.jobs.ModelDeploymentMonitoringJob.create')], 'timeout': 3600.0}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.PermissionDenied: 403 Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again.
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:78: PermissionDenied
------------------------------ Captured log call -------------------------------
INFO google.cloud.aiplatform.jobs:base.py:85 Creating ModelDeploymentMonitoringJob
_____ TestModelDeploymentMonitoring.test_mdm_two_models_two_valid_configs ______
[gw5] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
args = (parent: "projects/ucaip-sample-tests/locations/us-central1"
model_deployment_monitoring_job {
display_name: "temp_e...ocacyorg.joonix.net"
}
enable_logging: true
}
sample_predict_instance {
null_value: NULL_VALUE
}
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1'), ('x-goog-api-clie...0+top_google_constructor_method+google.cloud.aiplatform.jobs.ModelDeploymentMonitoringJob.create')], 'timeout': 3600.0}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:247: in __call__
response, ignored_call = self._with_call(request,
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:290: in _with_call
return call.result(), call
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:343: in result
raise self
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:274: in continuation
response, call = self._thunk(new_method).with_call(
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:957: in with_call
return _end_unary_response_blocking(state, call, True, None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state =
call =
with_call = True, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.PERMISSION_DENIED
E details = "Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again."
E debug_error_string = "UNKNOWN:Error received from peer ipv4:74.125.197.95:443 {created_time:"2025-02-28T01:54:19.205617897+00:00", grpc_status:7, grpc_message:"Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again."}"
E >
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
self =
shared_state = {'resources': [
resource name: projects/580378083368/locations/us-central1/endpoints/8528978766867726336]}
def test_mdm_two_models_two_valid_configs(self, shared_state):
assert len(shared_state["resources"]) == 1
self.endpoint = shared_state["resources"][0]
aiplatform.init(project=e2e_base._PROJECT, location=e2e_base._LOCATION)
[deployed_model1, deployed_model2] = list(
map(lambda x: x.id, self.endpoint.list_models())
)
all_configs = {
deployed_model1: objective_config,
deployed_model2: objective_config2,
}
> job = aiplatform.ModelDeploymentMonitoringJob.create(
display_name=self._make_display_name(key=JOB_NAME),
logging_sampling_strategy=sampling_strategy,
schedule_config=schedule_config,
alert_config=email_alert_config,
objective_configs=all_configs,
create_request_timeout=3600,
project=e2e_base._PROJECT,
location=e2e_base._LOCATION,
endpoint=self.endpoint,
)
tests/system/aiplatform/test_model_monitoring.py:292:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/jobs.py:3479: in create
self._gca_resource = self.api_client.create_model_deployment_monitoring_job(
google/cloud/aiplatform_v1/services/job_service/client.py:4469: in create_model_deployment_monitoring_job
response = rpc(
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131: in __call__
return wrapped_func(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/timeout.py:120: in func_with_timeout
return func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "projects/ucaip-sample-tests/locations/us-central1"
model_deployment_monitoring_job {
display_name: "temp_e...ocacyorg.joonix.net"
}
enable_logging: true
}
sample_predict_instance {
null_value: NULL_VALUE
}
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1'), ('x-goog-api-clie...0+top_google_constructor_method+google.cloud.aiplatform.jobs.ModelDeploymentMonitoringJob.create')], 'timeout': 3600.0}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.PermissionDenied: 403 Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again.
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:78: PermissionDenied
------------------------------ Captured log call -------------------------------
INFO google.cloud.aiplatform.jobs:base.py:85 Creating ModelDeploymentMonitoringJob
___ TestModelDeploymentMonitoring.test_mdm_notification_channel_alert_config ___
[gw5] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
args = (parent: "projects/ucaip-sample-tests/locations/us-central1"
model_deployment_monitoring_job {
display_name: "temp_e...le-tests/notificationChannels/11578134490450491958"
}
sample_predict_instance {
null_value: NULL_VALUE
}
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1'), ('x-goog-api-clie...0+top_google_constructor_method+google.cloud.aiplatform.jobs.ModelDeploymentMonitoringJob.create')], 'timeout': 3600.0}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:247: in __call__
response, ignored_call = self._with_call(request,
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:290: in _with_call
return call.result(), call
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:343: in result
raise self
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:274: in continuation
response, call = self._thunk(new_method).with_call(
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:957: in with_call
return _end_unary_response_blocking(state, call, True, None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state =
call =
with_call = True, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.PERMISSION_DENIED
E details = "Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again."
E debug_error_string = "UNKNOWN:Error received from peer ipv4:74.125.197.95:443 {grpc_message:"Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again.", grpc_status:7, created_time:"2025-02-28T01:54:21.798379881+00:00"}"
E >
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
self =
shared_state = {'resources': [
resource name: projects/580378083368/locations/us-central1/endpoints/8528978766867726336]}
def test_mdm_notification_channel_alert_config(self, shared_state):
self.endpoint = shared_state["resources"][0]
aiplatform.init(project=e2e_base._PROJECT, location=e2e_base._LOCATION)
# Reset objective_config.explanation_config
objective_config.explanation_config = None
# test model monitoring configurations
> job = aiplatform.ModelDeploymentMonitoringJob.create(
display_name=self._make_display_name(key=JOB_NAME),
logging_sampling_strategy=sampling_strategy,
schedule_config=schedule_config,
alert_config=alert_config,
objective_configs=objective_config,
create_request_timeout=3600,
project=e2e_base._PROJECT,
location=e2e_base._LOCATION,
endpoint=self.endpoint,
)
tests/system/aiplatform/test_model_monitoring.py:418:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/jobs.py:3479: in create
self._gca_resource = self.api_client.create_model_deployment_monitoring_job(
google/cloud/aiplatform_v1/services/job_service/client.py:4469: in create_model_deployment_monitoring_job
response = rpc(
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131: in __call__
return wrapped_func(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/timeout.py:120: in func_with_timeout
return func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "projects/ucaip-sample-tests/locations/us-central1"
model_deployment_monitoring_job {
display_name: "temp_e...le-tests/notificationChannels/11578134490450491958"
}
sample_predict_instance {
null_value: NULL_VALUE
}
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1'), ('x-goog-api-clie...0+top_google_constructor_method+google.cloud.aiplatform.jobs.ModelDeploymentMonitoringJob.create')], 'timeout': 3600.0}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.PermissionDenied: 403 Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again.
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:78: PermissionDenied
------------------------------ Captured log call -------------------------------
INFO google.cloud.aiplatform.jobs:base.py:85 Creating ModelDeploymentMonitoringJob
---------------------------- Captured log teardown -----------------------------
INFO google.cloud.aiplatform.models:base.py:189 Undeploying Endpoint model: projects/580378083368/locations/us-central1/endpoints/8528978766867726336
INFO google.cloud.aiplatform.models:base.py:209 Undeploy Endpoint model backing LRO: projects/580378083368/locations/us-central1/endpoints/8528978766867726336/operations/7844716500098220032
INFO google.cloud.aiplatform.models:base.py:222 Endpoint model undeployed. Resource name: projects/580378083368/locations/us-central1/endpoints/8528978766867726336
INFO google.cloud.aiplatform.models:base.py:189 Undeploying Endpoint model: projects/580378083368/locations/us-central1/endpoints/8528978766867726336
INFO google.cloud.aiplatform.models:base.py:209 Undeploy Endpoint model backing LRO: projects/580378083368/locations/us-central1/endpoints/8528978766867726336/operations/6921478576487268352
INFO google.cloud.aiplatform.models:base.py:222 Endpoint model undeployed. Resource name: projects/580378083368/locations/us-central1/endpoints/8528978766867726336
INFO google.cloud.aiplatform.base:base.py:189 Deleting Endpoint : projects/580378083368/locations/us-central1/endpoints/8528978766867726336
INFO google.cloud.aiplatform.base:base.py:222 Endpoint deleted. Resource name: projects/580378083368/locations/us-central1/endpoints/8528978766867726336
INFO google.cloud.aiplatform.base:base.py:156 Deleting Endpoint resource: projects/580378083368/locations/us-central1/endpoints/8528978766867726336
INFO google.cloud.aiplatform.base:base.py:161 Delete Endpoint backing LRO: projects/580378083368/locations/us-central1/operations/1654518812277473280
INFO google.cloud.aiplatform.base:base.py:174 Endpoint resource projects/580378083368/locations/us-central1/endpoints/8528978766867726336 deleted.
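All three TestModelDeploymentMonitoring failures share one root cause: the Vertex AI Service Agent lacks bigquery.tables.get on the bq://mco-mm.bqmlga4.train training table. One way to grant it is sketched below with the google-cloud-bigquery client; dataset-level READER includes bigquery.tables.get. This assumes the caller has rights to edit the dataset ACL, and granting roles/bigquery.dataViewer through IAM would work equally well:

# Hedged sketch: grant the service agent dataset-level READER on mco-mm.bqmlga4.
from google.cloud import bigquery

client = bigquery.Client(project="mco-mm")
dataset = client.get_dataset("mco-mm.bqmlga4")
entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="READER",  # READER includes the bigquery.tables.get permission
        entity_type="userByEmail",
        entity_id="service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com",
    )
)
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])  # persist the new ACL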
=============================== warnings summary ===============================
.nox/system-3-10/lib/python3.10/site-packages/google/cloud/storage/_http.py:19: 16 warnings
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/google/cloud/storage/_http.py:19: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
import pkg_resources
.nox/system-3-10/lib/python3.10/site-packages/pkg_resources/__init__.py:2832: 32 warnings
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pkg_resources/__init__.py:2832: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google')`.
Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
declare_namespace(pkg)
.nox/system-3-10/lib/python3.10/site-packages/pkg_resources/__init__.py:2832: 32 warnings
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pkg_resources/__init__.py:2832: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google.cloud')`.
Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
declare_namespace(pkg)
.nox/system-3-10/lib/python3.10/site-packages/pkg_resources/__init__.py:2317: 16 warnings
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pkg_resources/__init__.py:2317: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google')`.
Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
declare_namespace(parent)
tests/system/aiplatform/test_experiments.py: 38 warnings
tests/system/aiplatform/test_autologging.py: 5 warnings
tests/system/aiplatform/test_custom_job.py: 2 warnings
tests/system/aiplatform/test_model_evaluation.py: 2 warnings
/tmpfs/src/github/python-aiplatform/google/cloud/aiplatform/utils/_ipython_utils.py:149: DeprecationWarning: Importing display from IPython.core.display is deprecated since IPython 7.14, please import from IPython display
from IPython.core.display import display
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation_model_predict_async
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_async[grpc-PROD_ENDPOINT]
tests/system/aiplatform/test_model_interactions.py::TestModelInteractions::test_endpoint_predict_async
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pytest_asyncio/plugin.py:867: DeprecationWarning: The event_loop fixture provided by pytest-asyncio has been redefined in
/tmpfs/src/github/python-aiplatform/tests/system/aiplatform/e2e_base.py:212
Replacing the event_loop fixture with a custom implementation is deprecated
and will lead to errors in the future.
If you want to request an asyncio event loop with a scope other than function
scope, use the "loop_scope" argument to the asyncio mark when marking the tests.
If you want to return different types of event loops, use the event_loop_policy
fixture.
warnings.warn(
tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_xgboost_booster_with_custom_uri
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/xgboost/core.py:158: UserWarning: [23:35:45] WARNING: /workspace/src/c_api/c_api.cc:1374: Saving model in the UBJSON format as default. You can use file extension: `json`, `ubj` or `deprecated` to choose between formats.
warnings.warn(smsg, UserWarning)
tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_xgboost_xgbmodel_with_custom_names
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/xgboost/core.py:158: UserWarning: [23:35:47] WARNING: /workspace/src/c_api/c_api.cc:1374: Saving model in the UBJSON format as default. You can use file extension: `json`, `ubj` or `deprecated` to choose between formats.
warnings.warn(smsg, UserWarning)
tests/system/aiplatform/test_pipeline_job.py::TestPipelineJob::test_add_pipeline_job_to_experiment
tests/system/aiplatform/test_experiments.py::TestExperiments::test_add_pipeline_job_to_experiment
tests/system/aiplatform/test_pipeline_job_schedule.py::TestPipelineJobSchedule::test_create_get_pause_resume_update_list
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/kfp/dsl/component_decorator.py:126: FutureWarning: The default base_image used by the @dsl.component decorator will switch from 'python:3.9' to 'python:3.10' on Oct 1, 2025. To ensure your existing components work with versions of the KFP SDK released after that date, you should provide an explicit base_image argument and ensure your component works as intended on Python 3.10.
return component_factory.create_component_from_func(
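The FutureWarning goes away once a component pins its own base image, which also insulates it from the Oct 1, 2025 default switch. A minimal sketch with a hypothetical component:

    from kfp import dsl

    @dsl.component(base_image="python:3.10")  # explicit pin; no reliance on the default
    def add(a: int, b: int) -> int:
        return a + b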
tests/system/aiplatform/test_pipeline_job.py::TestPipelineJob::test_add_pipeline_job_to_experiment
tests/system/aiplatform/test_pipeline_job_schedule.py::TestPipelineJobSchedule::test_create_get_pause_resume_update_list
/tmpfs/src/github/python-aiplatform/google/cloud/aiplatform/pipeline_jobs.py:902: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
_LOGGER.warn(
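Logger.warn has been a deprecated alias of Logger.warning since Python 3.4, so the fix in pipeline_jobs.py:902 is a rename. A self-contained sketch:

    import logging

    _LOGGER = logging.getLogger(__name__)
    _LOGGER.warn("old spelling")     # emits this DeprecationWarning
    _LOGGER.warning("new spelling")  # identical behavior, no warning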
tests/system/aiplatform/test_experiments.py::TestExperiments::test_add_pipeline_job_to_experiment
/tmpfs/src/github/python-aiplatform/tests/system/aiplatform/test_experiments.py:376: DeprecationWarning: The module `kfp.v2` is deprecated and will be removed in a future version. Please import directly from the `kfp` namespace, instead of `kfp.v2`.
import kfp.v2.dsl as dsl
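The import rewrite the warning asks for is one line; both forms expose the same dsl module in KFP 2.x:

    import kfp.v2.dsl as dsl  # deprecated alias, warns on import
    from kfp import dsl       # canonical import in kfp >= 2.0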
tests/system/aiplatform/test_experiments.py::TestExperiments::test_add_pipeline_job_to_experiment
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/kfp/compiler/compiler.py:81: DeprecationWarning: Compiling to JSON is deprecated and will be removed in a future version. Please compile to a YAML file by providing a file path with a .yaml extension instead.
builder.write_pipeline_spec_to_file(
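Compiling to a .yaml package path instead of .json resolves this one. A minimal sketch with a hypothetical one-task pipeline:

    from kfp import compiler, dsl

    @dsl.component(base_image="python:3.10")
    def say_hello() -> str:
        return "hello"

    @dsl.pipeline(name="hello-pipeline")
    def hello_pipeline():
        say_hello()

    # A .yaml extension selects the supported output format:
    compiler.Compiler().compile(pipeline_func=hello_pipeline, package_path="pipeline.yaml")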
tests/system/aiplatform/test_prediction_cpr.py::TestPredictionCpr::test_build_cpr_model_upload_and_deploy
tests/system/aiplatform/test_prediction_cpr.py::TestPredictionCpr::test_build_cpr_model_upload_and_deploy
/usr/local/lib/python3.10/subprocess.py:955: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
self.stdin = io.open(p2cwrite, 'wb', bufsize)
tests/system/aiplatform/test_prediction_cpr.py::TestPredictionCpr::test_build_cpr_model_upload_and_deploy
tests/system/aiplatform/test_prediction_cpr.py::TestPredictionCpr::test_build_cpr_model_upload_and_deploy
/usr/local/lib/python3.10/subprocess.py:961: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
self.stdout = io.open(c2pread, 'rb', bufsize)
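This RuntimeWarning fires because bufsize=1 (line buffering) is only meaningful for text-mode pipes; in binary mode Python falls back to the default buffer size. A sketch of the two valid choices:

    import subprocess

    # Text mode: bufsize=1 is honored, pipes are line buffered.
    p = subprocess.Popen(["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                         bufsize=1, text=True)
    p.communicate("hello\n")

    # Binary mode: omit bufsize (or pass -1) and let Python pick the buffer size.
    q = subprocess.Popen(["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    q.communicate(b"hello\n")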
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_serve_to_df
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pyarrow/pandas_compat.py:735: DeprecationWarning: DatetimeTZBlock is deprecated and will be removed in a future version. Use public APIs instead.
klass=_int.DatetimeTZBlock,
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_serve_to_df
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pandas/core/frame.py:717: DeprecationWarning: Passing a BlockManager to DataFrame is deprecated and will raise in a future version. Use public APIs instead.
warnings.warn(
tests/system/aiplatform/test_e2e_tabular.py::TestEndToEndTabular::test_end_to_end_tabular
/tmpfs/src/github/python-aiplatform/tests/system/aiplatform/test_e2e_tabular.py:203: PendingDeprecationWarning: Blob.download_as_string() is deprecated and will be removed in future. Use Blob.download_as_bytes() instead.
error_output_filestr = blob.download_as_string().decode()
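The PendingDeprecationWarning names its own replacement; download_as_bytes() returns the same payload. A sketch assuming a google-cloud-storage client and hypothetical bucket/object names:

    from google.cloud import storage

    client = storage.Client()
    blob = client.bucket("example-bucket").blob("path/to/error_output.txt")  # hypothetical names
    error_output_filestr = blob.download_as_bytes().decode()  # replaces download_as_string()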
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
- generated xml file: /tmpfs/src/github/python-aiplatform/system_3.10_sponge_log.xml -
=========================== short test summary info ============================
FAILED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_xgboost_booster_with_custom_uri
FAILED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_xgboost_xgbmodel_with_custom_names
FAILED tests/system/aiplatform/test_autologging.py::TestAutologging::test_autologging_with_autorun_creation
FAILED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_tensorflow_keras_model_with_input_example
FAILED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_tensorflow_module_with_gpu_container
FAILED tests/system/aiplatform/test_prediction_cpr.py::TestPredictionCpr::test_build_cpr_model_upload_and_deploy
FAILED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_deploy_model_with_gpu_container
FAILED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_function_calling[grpc-PROD_ENDPOINT]
FAILED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_function_calling[rest-PROD_ENDPOINT]
FAILED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.0-pro-PROD_ENDPOINT]
FAILED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-pro-PROD_ENDPOINT]
FAILED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-flash-PROD_ENDPOINT]
FAILED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-flash-002-PROD_ENDPOINT]
FAILED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-pro-002-PROD_ENDPOINT]
FAILED tests/system/vertex_ray/test_ray_data.py::TestRayData::test_ray_data[2.9]
FAILED tests/system/vertex_ray/test_ray_data.py::TestRayData::test_ray_data[2.33]
FAILED tests/system/vertex_ray/test_cluster_management.py::TestClusterManagement::test_cluster_management[2.33]
FAILED tests/system/vertex_ray/test_job_submission_dashboard.py::TestJobSubmissionDashboard::test_job_submission_dashboard[2.9]
FAILED tests/system/vertex_ray/test_job_submission_dashboard.py::TestJobSubmissionDashboard::test_job_submission_dashboard[2.33]
FAILED tests/system/aiplatform/test_persistent_resource.py::TestPersistentResource::test_create_persistent_resource
FAILED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_two_models_one_valid_config
FAILED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_two_models_two_valid_configs
FAILED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_notification_channel_alert_config
===== 23 failed, 219 passed, 6 skipped, 162 warnings in 8707.72s (2:25:07) =====
nox > Command py.test -v --junitxml=system_3.10_sponge_log.xml tests/system failed with exit code 1
nox > Session system-3.10 failed.
[FlakyBot] Sending logs to Flaky Bot...
[FlakyBot] See https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot.
[FlakyBot] Published system_3.10_sponge_log.xml (14116925504925142)!
[FlakyBot] Done!
cleanup
[ID: 4692988] Command finished after 9038 secs, exit value: 1
Warning: Permanently added 'localhost' (ED25519) to the list of known hosts.
[17:59:05 PST] Collecting build artifacts from build VM
Build script failed with exit code: 1
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_remote_image[grpc-PROD_ENDPOINT]
[gw11] [ 54%] SKIPPED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_remote_image[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_remote_image[rest-PROD_ENDPOINT]
[gw11] [ 54%] SKIPPED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_remote_image[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_image[grpc-PROD_ENDPOINT]
[gw11] [ 54%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_image[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_image[rest-PROD_ENDPOINT]
[gw11] [ 55%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_image[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_video[grpc-PROD_ENDPOINT]
[gw11] [ 55%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_video[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_video[rest-PROD_ENDPOINT]
[gw11] [ 56%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_video[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_audio[grpc-PROD_ENDPOINT]
[gw11] [ 56%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_audio[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_audio[rest-PROD_ENDPOINT]
[gw14] [ 56%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[_get_tokenizer_for_model_preview-gemini-1.5-flash-002-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[_get_tokenizer_for_model_preview-gemini-1.5-pro-002-udhr-udhr-PROD_ENDPOINT]
[gw10] [ 57%] PASSED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_deploy_model_with_cpu_container
tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_deploy_model_with_gpu_container
[gw10] [ 57%] FAILED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_deploy_model_with_gpu_container
[gw11] [ 58%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_audio[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_grounding_google_search_retriever[grpc-PROD_ENDPOINT]
tests/system/aiplatform/test_vizier.py::TestVizier::test_vizier_lifecycle
[gw11] [ 58%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_grounding_google_search_retriever[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_grounding_google_search_retriever[rest-PROD_ENDPOINT]
[gw10] [ 58%] PASSED tests/system/aiplatform/test_vizier.py::TestVizier::test_vizier_lifecycle
tests/system/aiplatform/test_vizier.py::TestVizier::test_vizier_study_deletion
[gw10] [ 59%] PASSED tests/system/aiplatform/test_vizier.py::TestVizier::test_vizier_study_deletion
tests/system/aiplatform/test_vizier.py::TestVizier::test_vizier_trial_deletion
[gw10] [ 59%] PASSED tests/system/aiplatform/test_vizier.py::TestVizier::test_vizier_trial_deletion
[gw11] [ 60%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_grounding_google_search_retriever[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_grounding_google_search_retriever_with_dynamic_retrieval[grpc-PROD_ENDPOINT]
[gw11] [ 60%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_grounding_google_search_retriever_with_dynamic_retrieval[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_grounding_google_search_retriever_with_dynamic_retrieval[rest-PROD_ENDPOINT]
[gw11] [ 60%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_grounding_google_search_retriever_with_dynamic_retrieval[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_send_message_from_text[grpc-PROD_ENDPOINT]
[gw11] [ 61%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_send_message_from_text[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_send_message_from_text[rest-PROD_ENDPOINT]
[gw11] [ 61%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_send_message_from_text[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_chat_function_calling[grpc-PROD_ENDPOINT]
[gw11] [ 62%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_chat_function_calling[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_chat_function_calling[rest-PROD_ENDPOINT]
[gw11] [ 62%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_chat_function_calling[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_function_calling[grpc-PROD_ENDPOINT]
[gw11] [ 62%] FAILED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_function_calling[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_function_calling[rest-PROD_ENDPOINT]
[gw11] [ 63%] FAILED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_function_calling[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_model_router[grpc-PROD_ENDPOINT]
[gw14] [ 63%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[_get_tokenizer_for_model_preview-gemini-1.5-pro-002-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.0-pro-udhr-udhr-PROD_ENDPOINT]
[gw11] [ 64%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_model_router[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_model_router[rest-PROD_ENDPOINT]
[gw1] [ 64%] PASSED tests/system/aiplatform/test_custom_job.py::TestCustomJob::test_from_local_script_enable_autolog_prebuilt_container
tests/system/aiplatform/test_custom_job.py::TestCustomJob::test_from_local_script_enable_autolog_custom_container
[gw11] [ 64%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_model_router[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_chat_automatic_function_calling[grpc-PROD_ENDPOINT]
[gw11] [ 65%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_chat_automatic_function_calling[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_chat_automatic_function_calling[rest-PROD_ENDPOINT]
[gw11] [ 65%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_chat_automatic_function_calling[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_additional_request_metadata[grpc-PROD_ENDPOINT]
[gw11] [ 66%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_additional_request_metadata[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_additional_request_metadata[rest-PROD_ENDPOINT]
[gw11] [ 66%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_additional_request_metadata[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_compute_tokens_from_text[grpc-PROD_ENDPOINT]
[gw11] [ 66%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_compute_tokens_from_text[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_compute_tokens_from_text[rest-PROD_ENDPOINT]
[gw11] [ 67%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_compute_tokens_from_text[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_count_tokens_from_text[grpc-PROD_ENDPOINT]
[gw11] [ 67%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_count_tokens_from_text[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_count_tokens_from_text[rest-PROD_ENDPOINT]
[gw11] [ 68%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_count_tokens_from_text[rest-PROD_ENDPOINT]
[gw14] [ 68%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.0-pro-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.5-pro-udhr-udhr-PROD_ENDPOINT]
[gw14] [ 68%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.5-pro-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.5-flash-udhr-udhr-PROD_ENDPOINT]
[gw14] [ 69%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.5-flash-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.5-flash-002-udhr-udhr-PROD_ENDPOINT]
[gw3] [ 69%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_create_and_import_image_dataset
tests/system/aiplatform/test_dataset.py::TestDataset::test_create_tabular_dataset
[gw3] [ 70%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_create_tabular_dataset
tests/system/aiplatform/test_dataset.py::TestDataset::test_create_tabular_dataset_from_dataframe
[gw14] [ 70%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.5-flash-002-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.5-pro-002-udhr-udhr-PROD_ENDPOINT]
[gw1] [ 70%] PASSED tests/system/aiplatform/test_custom_job.py::TestCustomJob::test_from_local_script_enable_autolog_custom_container
[gw3] [ 71%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_create_tabular_dataset_from_dataframe
tests/system/aiplatform/test_dataset.py::TestDataset::test_create_tabular_dataset_from_dataframe_with_provided_schema
[gw3] [ 71%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_create_tabular_dataset_from_dataframe_with_provided_schema
tests/system/aiplatform/test_dataset.py::TestDataset::test_create_time_series_dataset
[gw12] [ 72%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_ingest_feature_values_from_df_using_feature_time_column_and_online_read_multiple_entities
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_ingest_feature_values_from_df_using_feature_time_datetime_and_online_read_single_entity
[gw3] [ 72%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_create_time_series_dataset
tests/system/aiplatform/test_dataset.py::TestDataset::test_export_data
[gw3] [ 72%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_export_data
tests/system/aiplatform/test_dataset.py::TestDataset::test_export_data_for_custom_training
[gw3] [ 73%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_export_data_for_custom_training
tests/system/aiplatform/test_dataset.py::TestDataset::test_update_dataset
[gw3] [ 73%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_update_dataset
[gw14] [ 74%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.5-pro-002-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.0-pro-PROD_ENDPOINT]
[gw14] [ 74%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.0-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.5-pro-PROD_ENDPOINT]
[gw14] [ 75%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.5-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.5-flash-PROD_ENDPOINT]
[gw14] [ 75%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.5-flash-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.5-flash-002-PROD_ENDPOINT]
[gw14] [ 75%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.5-flash-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.5-pro-002-PROD_ENDPOINT]
[gw14] [ 76%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.5-pro-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.0-pro-PROD_ENDPOINT]
[gw14] [ 76%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.0-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.5-pro-PROD_ENDPOINT]
[gw14] [ 77%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.5-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.5-flash-PROD_ENDPOINT]
[gw14] [ 77%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.5-flash-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.5-flash-002-PROD_ENDPOINT]
[gw14] [ 77%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.5-flash-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.5-pro-002-PROD_ENDPOINT]
[gw14] [ 78%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.5-pro-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.0-pro-PROD_ENDPOINT]
[gw14] [ 78%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.0-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.5-pro-PROD_ENDPOINT]
[gw14] [ 79%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.5-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.5-flash-PROD_ENDPOINT]
[gw14] [ 79%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.5-flash-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.5-flash-002-PROD_ENDPOINT]
[gw14] [ 79%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.5-flash-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.5-pro-002-PROD_ENDPOINT]
[gw14] [ 80%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.5-pro-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.0-pro-PROD_ENDPOINT]
[gw14] [ 80%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.0-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.5-pro-PROD_ENDPOINT]
[gw14] [ 81%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.5-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.5-flash-PROD_ENDPOINT]
[gw14] [ 81%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.5-flash-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.5-flash-002-PROD_ENDPOINT]
[gw14] [ 81%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.5-flash-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.5-pro-002-PROD_ENDPOINT]
[gw14] [ 82%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.5-pro-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.0-pro-PROD_ENDPOINT]
[gw14] [ 82%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.0-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.5-pro-PROD_ENDPOINT]
[gw14] [ 83%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.5-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.5-flash-PROD_ENDPOINT]
[gw14] [ 83%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.5-flash-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.5-flash-002-PROD_ENDPOINT]
[gw14] [ 83%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.5-flash-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.5-pro-002-PROD_ENDPOINT]
[gw14] [ 84%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.5-pro-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.0-pro-PROD_ENDPOINT]
[gw14] [ 84%] FAILED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.0-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-pro-PROD_ENDPOINT]
[gw14] [ 85%] FAILED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-flash-PROD_ENDPOINT]
[gw14] [ 85%] FAILED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-flash-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-flash-002-PROD_ENDPOINT]
[gw14] [ 85%] FAILED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-flash-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-pro-002-PROD_ENDPOINT]
[gw14] [ 86%] FAILED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-pro-002-PROD_ENDPOINT]
[gw12] [ 86%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_ingest_feature_values_from_df_using_feature_time_datetime_and_online_read_single_entity
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_write_features
[gw12] [ 87%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_write_features
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_search_features
[gw12] [ 87%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_search_features
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_serve_to_df
[gw12] [ 87%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_serve_to_df
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_serve_to_gcs
[gw12] [ 88%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_serve_to_gcs
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_serve_to_bq
[gw12] [ 88%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_serve_to_bq
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_online_reads
[gw12] [ 89%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_online_reads
[gw2] [ 89%] PASSED tests/system/aiplatform/test_model_evaluation.py::TestModelEvaluationJob::test_model_evaluate_custom_tabular_model
[gw13] [ 89%] FAILED tests/system/vertex_ray/test_ray_data.py::TestRayData::test_ray_data[2.9]
tests/system/vertex_ray/test_ray_data.py::TestRayData::test_ray_data[2.33]
[gw13] [ 90%] FAILED tests/system/vertex_ray/test_ray_data.py::TestRayData::test_ray_data[2.33]
[gw0] [ 90%] FAILED tests/system/vertex_ray/test_cluster_management.py::TestClusterManagement::test_cluster_management[2.33]
tests/system/vertex_ray/test_job_submission_dashboard.py::TestJobSubmissionDashboard::test_job_submission_dashboard[2.9]
[gw0] [ 91%] FAILED tests/system/vertex_ray/test_job_submission_dashboard.py::TestJobSubmissionDashboard::test_job_submission_dashboard[2.9]
tests/system/vertex_ray/test_job_submission_dashboard.py::TestJobSubmissionDashboard::test_job_submission_dashboard[2.33]
[gw0] [ 91%] FAILED tests/system/vertex_ray/test_job_submission_dashboard.py::TestJobSubmissionDashboard::test_job_submission_dashboard[2.33]
[gw15] [ 91%] PASSED tests/system/aiplatform/test_matching_engine_index.py::TestMatchingEngine::test_create_get_list_matching_engine_index
tests/system/aiplatform/test_matching_engine_index.py::TestMatchingEngine::test_matching_engine_stream_index
[gw5] [ 92%] PASSED tests/system/aiplatform/test_e2e_forecasting.py::TestEndToEndForecasting2::test_end_to_end_forecasting[SequenceToSequencePlusForecastingTrainingJob]
tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_create_endpoint
[gw4] [ 92%] PASSED tests/system/aiplatform/test_e2e_forecasting.py::TestEndToEndForecasting1::test_end_to_end_forecasting[AutoMLForecastingTrainingJob]
tests/system/aiplatform/test_model_interactions.py::TestModelInteractions::test_prediction
[gw4] [ 93%] PASSED tests/system/aiplatform/test_model_interactions.py::TestModelInteractions::test_prediction
tests/system/aiplatform/test_model_interactions.py::TestModelInteractions::test_endpoint_predict_async
[gw4] [ 93%] PASSED tests/system/aiplatform/test_model_interactions.py::TestModelInteractions::test_endpoint_predict_async
[gw7] [ 93%] PASSED tests/system/aiplatform/test_e2e_forecasting.py::TestEndToEndForecasting4::test_end_to_end_forecasting[TimeSeriesDenseEncoderForecastingTrainingJob]
tests/system/aiplatform/test_model_version_management.py::TestVersionManagement::test_upload_deploy_manage_versioned_model
[gw7] [ 94%] PASSED tests/system/aiplatform/test_model_version_management.py::TestVersionManagement::test_upload_deploy_manage_versioned_model
[gw15] [ 94%] PASSED tests/system/aiplatform/test_matching_engine_index.py::TestMatchingEngine::test_matching_engine_stream_index
tests/system/aiplatform/test_pipeline_job_schedule.py::TestPipelineJobSchedule::test_create_get_pause_resume_update_list
[gw6] [ 95%] PASSED tests/system/aiplatform/test_e2e_forecasting.py::TestEndToEndForecasting3::test_end_to_end_forecasting[TemporalFusionTransformerForecastingTrainingJob]
tests/system/aiplatform/test_model_upload.py::TestModelUploadAndUpdate::test_upload_and_deploy_xgboost_model
[gw9] [ 95%] PASSED tests/system/aiplatform/test_e2e_tabular.py::TestEndToEndTabular::test_end_to_end_tabular
tests/system/aiplatform/test_persistent_resource.py::TestPersistentResource::test_create_persistent_resource
[gw9] [ 95%] FAILED tests/system/aiplatform/test_persistent_resource.py::TestPersistentResource::test_create_persistent_resource
[gw5] [ 96%] PASSED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_create_endpoint
tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_two_models_one_valid_config
[gw5] [ 96%] FAILED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_two_models_one_valid_config
tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_pause_and_update_config
[gw5] [ 97%] SKIPPED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_pause_and_update_config
tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_two_models_two_valid_configs
[gw5] [ 97%] FAILED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_two_models_two_valid_configs
tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_invalid_config_incorrect_model_id
[gw5] [ 97%] PASSED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_invalid_config_incorrect_model_id
tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_invalid_config_xai
[gw5] [ 98%] PASSED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_invalid_config_xai
tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_two_models_invalid_configs_xai
[gw5] [ 98%] PASSED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_two_models_invalid_configs_xai
tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_notification_channel_alert_config
[gw5] [ 99%] FAILED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_notification_channel_alert_config
[gw15] [ 99%] PASSED tests/system/aiplatform/test_pipeline_job_schedule.py::TestPipelineJobSchedule::test_create_get_pause_resume_update_list
[gw6] [100%] PASSED tests/system/aiplatform/test_model_upload.py::TestModelUploadAndUpdate::test_upload_and_deploy_xgboost_model
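The run above ends at 100% with a mix of PASSED, FAILED, and SKIPPED results spread across xdist workers (the [gwN] prefixes). Each FAILED entry is a complete pytest node ID, so any single failure can be reproduced in isolation before digging into the tracebacks below. A minimal sketch, assuming the repo's system-test dependencies and GCP credentials are available in the local environment:

import pytest

# Re-run one failed node ID from the log above, serially (no xdist),
# stopping at the first failure. The node ID is copied verbatim.
raise SystemExit(
    pytest.main(
        [
            "-x",
            "tests/system/vertexai/test_generative_models.py::"
            "TestGenerativeModels::"
            "test_generate_content_function_calling[grpc-PROD_ENDPOINT]",
        ]
    )
)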
=================================== FAILURES ===================================
___________ TestExperimentModel.test_xgboost_booster_with_custom_uri ___________
[gw10] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
args = (parent: "projects/ucaip-sample-tests/locations/us-central1/metadataStores/default"
artifact {
uri: "gs://test-verte... key: "frameworkName"
value {
string_value: "xgboost"
}
}
}
}
artifact_id: "xgb-booster"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1/metadataStores/defau...pc/1.51.3 gax/2.21.0 gapic/1.82.0+top_google_constructor_method+google.cloud.aiplatform.metadata._models.save_model')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:247: in __call__
response, ignored_call = self._with_call(request,
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:290: in _with_call
return call.result(), call
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:343: in result
raise self
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:274: in continuation
response, call = self._thunk(new_method).with_call(
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:957: in with_call
return _end_unary_response_blocking(state, call, True, None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state =
call =
with_call = True, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.ALREADY_EXISTS
E details = "Artifact with name projects/580378083368/locations/us-central1/metadataStores/default/artifacts/xgb-booster already exists"
E debug_error_string = "UNKNOWN:Error received from peer ipv4:142.251.188.95:443 {created_time:"2025-02-27T23:35:46.393471566+00:00", grpc_status:6, grpc_message:"Artifact with name projects/580378083368/locations/us-central1/metadataStores/default/artifacts/xgb-booster already exists"}"
E >
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
self =
shared_state = {'bucket': , 'resources': [}
def test_xgboost_booster_with_custom_uri(self, shared_state):
aiplatform.init(
project=e2e_base._PROJECT,
location=e2e_base._LOCATION,
staging_bucket=f"gs://{shared_state['staging_bucket_name']}",
)
train_x = np.array([[1, 1], [1, 2], [2, 2], [2, 3]])
train_y = np.array([1, 1, 0, 0])
dtrain = xgb.DMatrix(data=train_x, label=train_y)
booster = xgb.train(
params={"num_parallel_tree": 4, "subsample": 0.5, "num_class": 2},
dtrain=dtrain,
)
# Test save xgboost booster model with custom uri
uri = f"gs://{shared_state['staging_bucket_name']}/custom-uri"
> aiplatform.save_model(
model=booster,
artifact_id="xgb-booster",
uri=uri,
)
tests/system/aiplatform/test_experiment_model.py:112:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/metadata/_models.py:530: in save_model
model_artifact.create(
google/cloud/aiplatform/metadata/schema/base_artifact.py:186: in create
new_artifact_instance = artifact.Artifact.create(
google/cloud/aiplatform/metadata/artifact.py:354: in create
return cls._create(
google/cloud/aiplatform/metadata/artifact.py:204: in _create
resource = cls._create_resource(
google/cloud/aiplatform/metadata/artifact.py:113: in _create_resource
return client.create_artifact(
google/cloud/aiplatform_v1/services/metadata_service/client.py:1504: in create_artifact
response = rpc(
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131: in __call__
return wrapped_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "projects/ucaip-sample-tests/locations/us-central1/metadataStores/default"
artifact {
uri: "gs://test-verte... key: "frameworkName"
value {
string_value: "xgboost"
}
}
}
}
artifact_id: "xgb-booster"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1/metadataStores/defau...pc/1.51.3 gax/2.21.0 gapic/1.82.0+top_google_constructor_method+google.cloud.aiplatform.metadata._models.save_model')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.AlreadyExists: 409 Artifact with name projects/580378083368/locations/us-central1/metadataStores/default/artifacts/xgb-booster already exists
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:78: AlreadyExists
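This AlreadyExists failure, and the identical ones further down for xgboost-xgbmodel, keras-model, and tf-module, share a single root cause: the test passes a fixed artifact_id to aiplatform.save_model, so an artifact left behind by an earlier run of the suite collides with the new create call. A minimal re-runnable sketch, assuming the public aiplatform.Artifact class resolves a bare ID against the default metadata store and supports delete() as documented; save_model_fresh is a hypothetical helper, not part of the SDK:

from google.api_core import exceptions
from google.cloud import aiplatform


def save_model_fresh(model, artifact_id, **kwargs):
    """Delete any artifact a previous run left under this ID, then save.

    Assumes aiplatform.init() has already set project/location, so the
    bare artifact_id resolves against the default metadata store.
    """
    try:
        aiplatform.Artifact(artifact_id).delete()
    except exceptions.NotFound:
        pass  # clean slate: nothing left over to remove
    return aiplatform.save_model(model=model, artifact_id=artifact_id, **kwargs)

Under those assumptions, save_model_fresh(booster, "xgb-booster", uri=uri) in place of the failing call would make the test idempotent across runs.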
_________ TestExperimentModel.test_xgboost_xgbmodel_with_custom_names __________
[gw10] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
args = (parent: "projects/ucaip-sample-tests/locations/us-central1/metadataStores/default"
artifact {
display_name: "custom...y: "frameworkName"
value {
string_value: "xgboost"
}
}
}
}
artifact_id: "xgboost-xgbmodel"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1/metadataStores/defau...pc/1.51.3 gax/2.21.0 gapic/1.82.0+top_google_constructor_method+google.cloud.aiplatform.metadata._models.save_model')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:247: in __call__
response, ignored_call = self._with_call(request,
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:290: in _with_call
return call.result(), call
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:343: in result
raise self
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:274: in continuation
response, call = self._thunk(new_method).with_call(
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:957: in with_call
return _end_unary_response_blocking(state, call, True, None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state =
call =
with_call = True, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.ALREADY_EXISTS
E details = "Artifact with name projects/580378083368/locations/us-central1/metadataStores/default/artifacts/xgboost-xgbmodel already exists"
E debug_error_string = "UNKNOWN:Error received from peer ipv4:142.251.188.95:443 {grpc_message:"Artifact with name projects/580378083368/locations/us-central1/metadataStores/default/artifacts/xgboost-xgbmodel already exists", grpc_status:6, created_time:"2025-02-27T23:35:48.237873303+00:00"}"
E >
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
self =
shared_state = {'bucket': , 'resources': [}
def test_xgboost_xgbmodel_with_custom_names(self, shared_state):
aiplatform.init(
project=e2e_base._PROJECT,
location=e2e_base._LOCATION,
staging_bucket=f"gs://{shared_state['staging_bucket_name']}",
)
train_x = np.array([[1, 1], [1, 2], [2, 2], [2, 3]])
train_y = np.array([1, 1, 0, 0])
xgb_model = xgb.XGBClassifier()
xgb_model.fit(train_x, train_y)
# Test save xgboost xgbmodel with custom display_name
> aiplatform.save_model(
model=xgb_model,
artifact_id="xgboost-xgbmodel",
display_name="custom-experiment-model-name",
)
tests/system/aiplatform/test_experiment_model.py:165:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/metadata/_models.py:530: in save_model
model_artifact.create(
google/cloud/aiplatform/metadata/schema/base_artifact.py:186: in create
new_artifact_instance = artifact.Artifact.create(
google/cloud/aiplatform/metadata/artifact.py:354: in create
return cls._create(
google/cloud/aiplatform/metadata/artifact.py:204: in _create
resource = cls._create_resource(
google/cloud/aiplatform/metadata/artifact.py:113: in _create_resource
return client.create_artifact(
google/cloud/aiplatform_v1/services/metadata_service/client.py:1504: in create_artifact
response = rpc(
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131: in __call__
return wrapped_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "projects/ucaip-sample-tests/locations/us-central1/metadataStores/default"
artifact {
display_name: "custom...y: "frameworkName"
value {
string_value: "xgboost"
}
}
}
}
artifact_id: "xgboost-xgbmodel"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1/metadataStores/defau...pc/1.51.3 gax/2.21.0 gapic/1.82.0+top_google_constructor_method+google.cloud.aiplatform.metadata._models.save_model')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.AlreadyExists: 409 Artifact with name projects/580378083368/locations/us-central1/metadataStores/default/artifacts/xgboost-xgbmodel already exists
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:78: AlreadyExists
____________ TestAutologging.test_autologging_with_autorun_creation ____________
[gw0] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
self = Index(['experiment_name', 'run_name', 'run_type', 'state', 'param.copy_X',
'param.fit_intercept', 'param.positi...an_squared_error', 'metric.training_r2_score',
'metric.training_root_mean_squared_error'],
dtype='object')
key = 'metric.training_mae'
def get_loc(self, key):
"""
Get integer location, slice or boolean mask for requested label.
Parameters
----------
key : label
Returns
-------
int if unique index, slice if monotonic index, else mask
Examples
--------
>>> unique_index = pd.Index(list('abc'))
>>> unique_index.get_loc('b')
1
>>> monotonic_index = pd.Index(list('abbc'))
>>> monotonic_index.get_loc('b')
slice(1, 3, None)
>>> non_monotonic_index = pd.Index(list('abcb'))
>>> non_monotonic_index.get_loc('b')
array([False, True, False, True])
"""
casted_key = self._maybe_cast_indexer(key)
try:
> return self._engine.get_loc(casted_key)
.nox/system-3-10/lib/python3.10/site-packages/pandas/core/indexes/base.py:3805:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
index.pyx:167: in pandas._libs.index.IndexEngine.get_loc
???
index.pyx:196: in pandas._libs.index.IndexEngine.get_loc
???
pandas/_libs/hashtable_class_helper.pxi:7081: in pandas._libs.hashtable.PyObjectHashTable.get_item
???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> ???
E KeyError: 'metric.training_mae'
pandas/_libs/hashtable_class_helper.pxi:7089: KeyError
The above exception was the direct cause of the following exception:
self =
shared_state = {'bucket': , 'resources': [}
def test_autologging_with_autorun_creation(self, shared_state):
aiplatform.init(
project=e2e_base._PROJECT,
location=e2e_base._LOCATION,
experiment=self._experiment_autocreate_scikit,
experiment_tensorboard=self._backing_tensorboard,
)
shared_state["resources"] = [self._backing_tensorboard]
shared_state["resources"].append(
aiplatform.metadata.metadata._experiment_tracker.experiment
)
aiplatform.autolog()
build_and_train_test_scikit_model()
# Confirm sklearn run, params, and metrics exist
experiment_df_scikit = aiplatform.get_experiment_df()
assert experiment_df_scikit["run_name"][0].startswith("sklearn-")
assert experiment_df_scikit["param.fit_intercept"][0] == "True"
> assert experiment_df_scikit["metric.training_mae"][0] > 0
tests/system/aiplatform/test_autologging.py:162:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-10/lib/python3.10/site-packages/pandas/core/frame.py:4102: in __getitem__
indexer = self.columns.get_loc(key)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Index(['experiment_name', 'run_name', 'run_type', 'state', 'param.copy_X',
'param.fit_intercept', 'param.positi...an_squared_error', 'metric.training_r2_score',
'metric.training_root_mean_squared_error'],
dtype='object')
key = 'metric.training_mae'
def get_loc(self, key):
"""
Get integer location, slice or boolean mask for requested label.
Parameters
----------
key : label
Returns
-------
int if unique index, slice if monotonic index, else mask
Examples
--------
>>> unique_index = pd.Index(list('abc'))
>>> unique_index.get_loc('b')
1
>>> monotonic_index = pd.Index(list('abbc'))
>>> monotonic_index.get_loc('b')
slice(1, 3, None)
>>> non_monotonic_index = pd.Index(list('abcb'))
>>> non_monotonic_index.get_loc('b')
array([False, True, False, True])
"""
casted_key = self._maybe_cast_indexer(key)
try:
return self._engine.get_loc(casted_key)
except KeyError as err:
if isinstance(casted_key, slice) or (
isinstance(casted_key, abc.Iterable)
and any(isinstance(x, slice) for x in casted_key)
):
raise InvalidIndexError(key)
> raise KeyError(key) from err
E KeyError: 'metric.training_mae'
.nox/system-3-10/lib/python3.10/site-packages/pandas/core/indexes/base.py:3812: KeyError
------------------------------ Captured log setup ------------------------------
INFO google.cloud.aiplatform.tensorboard.tensorboard_resource:base.py:85 Creating Tensorboard
INFO google.cloud.aiplatform.tensorboard.tensorboard_resource:base.py:88 Create Tensorboard backing LRO: projects/580378083368/locations/us-central1/tensorboards/5394023725962100736/operations/1965830136519458816
INFO google.cloud.aiplatform.tensorboard.tensorboard_resource:base.py:113 Tensorboard created. Resource name: projects/580378083368/locations/us-central1/tensorboards/5394023725962100736
INFO google.cloud.aiplatform.tensorboard.tensorboard_resource:base.py:114 To use this Tensorboard in another session:
INFO google.cloud.aiplatform.tensorboard.tensorboard_resource:base.py:115 tb = aiplatform.Tensorboard('projects/580378083368/locations/us-central1/tensorboards/5394023725962100736')
------------------------------ Captured log call -------------------------------
INFO google.cloud.aiplatform.metadata.experiment_resources:experiment_resources.py:797 Associating projects/580378083368/locations/us-central1/metadataStores/default/contexts/tmpvrtxsdk-e2e--451794e1-4b8f-4f12-8d8a-960e94d5d7b1-sklearn-2025-02-27-23-35-41-86f72 to Experiment: tmpvrtxsdk-e2e--451794e1-4b8f-4f12-8d8a-960e94d5d7b1
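The KeyError here is a column-name mismatch rather than a missing experiment run: the Index in the traceback already contains metric.training_r2_score and metric.training_root_mean_squared_error, which matches the newer mlflow autologging metric names (training_mean_absolute_error rather than the older training_mae). A hedged replacement for the failing assertion at test_autologging.py:162, assuming that reading of the column names is correct:

# Accept either autologged MAE column name; which one appears depends on
# the mlflow version doing the autologging (an inference from the columns
# visible in the traceback above, not a documented guarantee).
mae_columns = [
    column
    for column in experiment_df_scikit.columns
    if column in ("metric.training_mae", "metric.training_mean_absolute_error")
]
assert mae_columns, f"no MAE column among {list(experiment_df_scikit.columns)}"
assert experiment_df_scikit[mae_columns[0]][0] > 0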
______ TestExperimentModel.test_tensorflow_keras_model_with_input_example ______
[gw10] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
args = (parent: "projects/ucaip-sample-tests/locations/us-central1/metadataStores/default"
artifact {
uri: "gs://test-verte...key: "frameworkName"
value {
string_value: "tensorflow"
}
}
}
}
artifact_id: "keras-model"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1/metadataStores/defau...pc/1.51.3 gax/2.21.0 gapic/1.82.0+top_google_constructor_method+google.cloud.aiplatform.metadata._models.save_model')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:247: in __call__
response, ignored_call = self._with_call(request,
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:290: in _with_call
return call.result(), call
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:343: in result
raise self
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:274: in continuation
response, call = self._thunk(new_method).with_call(
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:957: in with_call
return _end_unary_response_blocking(state, call, True, None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state =
call =
with_call = True, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.ALREADY_EXISTS
E details = "Artifact with name projects/580378083368/locations/us-central1/metadataStores/default/artifacts/keras-model already exists"
E debug_error_string = "UNKNOWN:Error received from peer ipv4:142.251.188.95:443 {created_time:"2025-02-27T23:35:58.388488812+00:00", grpc_status:6, grpc_message:"Artifact with name projects/580378083368/locations/us-central1/metadataStores/default/artifacts/keras-model already exists"}"
E >
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
self =
shared_state = {'bucket': , 'resources': [}
def test_tensorflow_keras_model_with_input_example(self, shared_state):
aiplatform.init(
project=e2e_base._PROJECT,
location=e2e_base._LOCATION,
staging_bucket=f"gs://{shared_state['staging_bucket_name']}",
)
train_x = np.random.random((100, 2))
train_y = np.random.random((100, 1))
model = tf.keras.Sequential(
[tf.keras.layers.Dense(5, input_shape=(2,)), tf.keras.layers.Softmax()]
)
model.compile(optimizer="adam", loss="mean_squared_error")
model.fit(train_x, train_y)
# Test save tf.keras model with input example
> aiplatform.save_model(
model=model,
artifact_id="keras-model",
input_example=train_x,
)
tests/system/aiplatform/test_experiment_model.py:225:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/metadata/_models.py:530: in save_model
model_artifact.create(
google/cloud/aiplatform/metadata/schema/base_artifact.py:186: in create
new_artifact_instance = artifact.Artifact.create(
google/cloud/aiplatform/metadata/artifact.py:354: in create
return cls._create(
google/cloud/aiplatform/metadata/artifact.py:204: in _create
resource = cls._create_resource(
google/cloud/aiplatform/metadata/artifact.py:113: in _create_resource
return client.create_artifact(
google/cloud/aiplatform_v1/services/metadata_service/client.py:1504: in create_artifact
response = rpc(
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131: in __call__
return wrapped_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "projects/ucaip-sample-tests/locations/us-central1/metadataStores/default"
artifact {
uri: "gs://test-verte...key: "frameworkName"
value {
string_value: "tensorflow"
}
}
}
}
artifact_id: "keras-model"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1/metadataStores/defau...pc/1.51.3 gax/2.21.0 gapic/1.82.0+top_google_constructor_method+google.cloud.aiplatform.metadata._models.save_model')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.AlreadyExists: 409 Artifact with name projects/580378083368/locations/us-central1/metadataStores/default/artifacts/keras-model already exists
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:78: AlreadyExists
----------------------------- Captured stdout call -----------------------------
1/4 [======>.......................] - ETA: 2s - loss: 0.1364
4/4 [==============================] - 1s 3ms/step - loss: 0.1722
________ TestExperimentModel.test_tensorflow_module_with_gpu_container _________
[gw10] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
args = (parent: "projects/ucaip-sample-tests/locations/us-central1/metadataStores/default"
artifact {
uri: "gs://test-verte... key: "frameworkName"
value {
string_value: "tensorflow"
}
}
}
}
artifact_id: "tf-module"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1/metadataStores/defau...pc/1.51.3 gax/2.21.0 gapic/1.82.0+top_google_constructor_method+google.cloud.aiplatform.metadata._models.save_model')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:247: in __call__
response, ignored_call = self._with_call(request,
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:290: in _with_call
return call.result(), call
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:343: in result
raise self
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:274: in continuation
response, call = self._thunk(new_method).with_call(
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:957: in with_call
return _end_unary_response_blocking(state, call, True, None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state =
call =
with_call = True, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.ALREADY_EXISTS
E details = "Artifact with name projects/580378083368/locations/us-central1/metadataStores/default/artifacts/tf-module already exists"
E debug_error_string = "UNKNOWN:Error received from peer ipv4:108.177.98.95:443 {grpc_message:"Artifact with name projects/580378083368/locations/us-central1/metadataStores/default/artifacts/tf-module already exists", grpc_status:6, created_time:"2025-02-27T23:36:07.159474185+00:00"}"
E >
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
self =
shared_state = {'bucket': , 'resources': [}
    def test_tensorflow_module_with_gpu_container(self, shared_state):
        aiplatform.init(
            project=e2e_base._PROJECT,
            location=e2e_base._LOCATION,
            staging_bucket=f"gs://{shared_state['staging_bucket_name']}",
        )
        class Adder(tf.Module):
            @tf.function(
                input_signature=[
                    tf.TensorSpec(
                        shape=[
                            2,
                        ],
                        dtype=tf.float32,
                    )
                ]
            )
            def add(self, x):
                return x + x
        model = Adder()
        # Test save tf.Module model
>       aiplatform.save_model(model, "tf-module")
tests/system/aiplatform/test_experiment_model.py:293:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/metadata/_models.py:530: in save_model
    model_artifact.create(
google/cloud/aiplatform/metadata/schema/base_artifact.py:186: in create
    new_artifact_instance = artifact.Artifact.create(
google/cloud/aiplatform/metadata/artifact.py:354: in create
    return cls._create(
google/cloud/aiplatform/metadata/artifact.py:204: in _create
    resource = cls._create_resource(
google/cloud/aiplatform/metadata/artifact.py:113: in _create_resource
    return client.create_artifact(
google/cloud/aiplatform_v1/services/metadata_service/client.py:1504: in create_artifact
    response = rpc(
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131: in __call__
    return wrapped_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "projects/ucaip-sample-tests/locations/us-central1/metadataStores/default"
artifact {
uri: "gs://test-verte... key: "frameworkName"
value {
string_value: "tensorflow"
}
}
}
}
artifact_id: "tf-module"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1/metadataStores/defau...pc/1.51.3 gax/2.21.0 gapic/1.82.0+top_google_constructor_method+google.cloud.aiplatform.metadata._models.save_model')]}
    @functools.wraps(callable_)
    def error_remapped_callable(*args, **kwargs):
        try:
            return callable_(*args, **kwargs)
        except grpc.RpcError as exc:
>           raise exceptions.from_grpc_error(exc) from exc
E           google.api_core.exceptions.AlreadyExists: 409 Artifact with name projects/580378083368/locations/us-central1/metadataStores/default/artifacts/tf-module already exists
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:78: AlreadyExists
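Both AlreadyExists failures in this run follow the same pattern: the test passes a fixed artifact ID ("keras-model" above, "tf-module" here) to aiplatform.save_model, and the metadata service rejects the CreateArtifact call with gRPC status ALREADY_EXISTS (HTTP 409) because an artifact with that name is still present from an earlier run. A minimal sketch of one way to avoid the collision, assuming save_model takes the artifact ID as its second positional argument exactly as the traceback shows (the helper below is illustrative, not part of the test suite):

    import uuid

    from google.cloud import aiplatform

    def save_model_with_unique_id(model, base_artifact_id):
        # Suffix the artifact ID so a rerun never collides with an
        # artifact left behind in the metadata store by a prior run.
        artifact_id = f"{base_artifact_id}-{uuid.uuid4().hex[:8]}"
        return aiplatform.save_model(model, artifact_id)

The alternative is a teardown step that deletes the artifacts these tests create, so the fixed IDs stay reusable across runs.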
___________ TestPredictionCpr.test_build_cpr_model_upload_and_deploy ___________
[gw13] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
self =
shared_state = {}
caplog = <_pytest.logging.LogCaptureFixture object at 0x7f8e52c0aaa0>
    def test_build_cpr_model_upload_and_deploy(self, shared_state, caplog):
        """Creates a CPR model from custom predictor, uploads it and deploys."""
        caplog.set_level(logging.INFO)
        aiplatform.init(project=e2e_base._PROJECT, location=e2e_base._LOCATION)
        local_model = LocalModel.build_cpr_model(
            _USER_CODE_DIR,
            _IMAGE_URI,
            predictor=SklearnPredictor,
            requirements_path=os.path.join(_USER_CODE_DIR, _REQUIREMENTS_FILE),
        )
        with local_model.deploy_to_local_endpoint(
            artifact_uri=_LOCAL_MODEL_DIR,
        ) as local_endpoint:
            local_predict_response = local_endpoint.predict(
                request=f'{{"instances": {_PREDICTION_INPUT}}}',
                headers={"Content-Type": "application/json"},
            )
        assert len(json.loads(local_predict_response.content)["predictions"]) == 1
        interactive_local_endpoint = local_model.deploy_to_local_endpoint(
            artifact_uri=_LOCAL_MODEL_DIR,
        )
        interactive_local_endpoint.serve()
        interactive_local_predict_response = interactive_local_endpoint.predict(
            request=f'{{"instances": {_PREDICTION_INPUT}}}',
            headers={"Content-Type": "application/json"},
        )
        interactive_local_endpoint.stop()
        assert (
            len(json.loads(interactive_local_predict_response.content)["predictions"])
            == 1
        )
        # Configure docker.
        logging.info(
            subprocess.run(["gcloud", "auth", "configure-docker"], capture_output=True)
        )
>       local_model.push_image()
tests/system/aiplatform/test_prediction_cpr.py:94:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/prediction/local_model.py:612: in push_image
    errors.raise_docker_error_with_command(command, return_code)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
command = ['docker', 'push', 'gcr.io/ucaip-sample-tests/prediction-cpr/sklearn:20250227_233526']
return_code = 1
    def raise_docker_error_with_command(command: List[str], return_code: int) -> NoReturn:
        """Raises DockerError with the given command and return code.
        Args:
            command (List(str)):
                Required. The docker command that fails.
            return_code (int):
                Required. The return code from the command.
        Raises:
            DockerError which error message populated by the given command and return code.
        """
        error_msg = textwrap.dedent(
            """
            Docker failed with error code {code}.
            Command: {cmd}
            """.format(
                code=return_code, cmd=" ".join(command)
            )
        )
>       raise DockerError(error_msg, command, return_code)
E google.cloud.aiplatform.docker_utils.errors.DockerError: ('\nDocker failed with error code 1.\nCommand: docker push gcr.io/ucaip-sample-tests/prediction-cpr/sklearn:20250227_233526\n', ['docker', 'push', 'gcr.io/ucaip-sample-tests/prediction-cpr/sklearn:20250227_233526'], 1)
google/cloud/aiplatform/docker_utils/errors.py:60: DockerError
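This test fails at the push step, not the build: the captured log below shows the image building successfully, and the DockerError carries only the command and its exit code (1), not the registry's stderr, so the underlying cause (typically an authentication or permission problem against gcr.io) is not visible here. A minimal sketch of rerunning the push by hand to surface that output; the image tag is taken from the traceback above, and the helper is illustrative rather than part of the SDK:

    import subprocess

    def push_and_report(image_uri):
        # Run docker push directly so the registry's stderr (e.g. an
        # auth or permission denial) is visible, not just the exit code.
        result = subprocess.run(
            ["docker", "push", image_uri],
            capture_output=True,
            text=True,
        )
        if result.returncode != 0:
            raise RuntimeError(
                f"docker push failed ({result.returncode}):\n{result.stderr}"
            )

    push_and_report(
        "gcr.io/ucaip-sample-tests/prediction-cpr/sklearn:20250227_233526"
    )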
------------------------------ Captured log call -------------------------------
INFO google.cloud.aiplatform.docker_utils.build:build.py:531 Running command: docker build -t gcr.io/ucaip-sample-tests/prediction-cpr/sklearn:20250227_233526 --rm -f- /tmpfs/src/github/python-aiplatform/tests/system/aiplatform/test_resources/cpr_user_code
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Install the buildx component to build images with BuildKit:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 https://docs.docker.com/go/buildx/
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Sending build context to Docker daemon 11.31kB
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 1/14 : FROM python:3.10
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 3.10: Pulling from library/python
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 155ad54a8b28: Pulling fs layer
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 8031108f3cda: Pulling fs layer
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 1d281e50d3e4: Pulling fs layer
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 447713e77b4f: Pulling fs layer
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 a6c2fd51c72c: Pulling fs layer
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 2268f82e627e: Pulling fs layer
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 7fda9d093afe: Pulling fs layer
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 447713e77b4f: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 a6c2fd51c72c: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 2268f82e627e: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 7fda9d093afe: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 8031108f3cda: Verifying Checksum
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 8031108f3cda: Download complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 155ad54a8b28: Verifying Checksum
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 155ad54a8b28: Download complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 1d281e50d3e4: Download complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 a6c2fd51c72c: Verifying Checksum
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 a6c2fd51c72c: Download complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 2268f82e627e: Verifying Checksum
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 2268f82e627e: Download complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 7fda9d093afe: Verifying Checksum
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 7fda9d093afe: Download complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 447713e77b4f: Verifying Checksum
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 447713e77b4f: Download complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 155ad54a8b28: Pull complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 8031108f3cda: Pull complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 1d281e50d3e4: Pull complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 447713e77b4f: Pull complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 a6c2fd51c72c: Pull complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 2268f82e627e: Pull complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 7fda9d093afe: Pull complete
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Digest: sha256:e70cd7b54564482c0dee8cd6d8e314450aac59ea0ff669ffa715207ea0e04fa6
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Status: Downloaded newer image for python:3.10
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> e83a01774710
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 2/14 : ENV PYTHONDONTWRITEBYTECODE=1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in bd432ce2d4e9
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container bd432ce2d4e9
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 24464a65023e
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 3/14 : EXPOSE 8080
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in 5e0d47c9c68a
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container 5e0d47c9c68a
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> eb56ee3a6db2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 4/14 : ENTRYPOINT ["python", "-m", "google.cloud.aiplatform.prediction.model_server"]
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in d63e8d275002
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container d63e8d275002
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 1f6b24bd94de
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 5/14 : RUN mkdir -m 777 -p /usr/app /home
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in d00ca8d6782b
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container d00ca8d6782b
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> c6114d64d134
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 6/14 : WORKDIR /usr/app
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in ff8e9f247359
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container ff8e9f247359
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 977a68b61c9f
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 7/14 : ENV HOME=/home
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in f76020a49a58
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container f76020a49a58
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 7dfdffe07574
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 8/14 : RUN pip install --no-cache-dir --force-reinstall 'google-cloud-aiplatform[prediction]>=1.27.0'
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in 0949a860b436
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-aiplatform[prediction]>=1.27.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_aiplatform-1.82.0-py2.py3-none-any.whl (7.3 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.3/7.3 MB 29.1 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-auth<3.0.0dev,>=2.14.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_auth-2.38.0-py2.py3-none-any.whl (210 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 210.8/210.8 kB 220.0 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting shapely<3.0.0dev
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading shapely-2.0.7-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.5 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.5/2.5 MB 77.2 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting docstring-parser<1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading docstring_parser-0.16-py3-none-any.whl (36 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pydantic<3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading pydantic-2.10.6-py3-none-any.whl (431 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 431.7/431.7 kB 127.7 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-bigquery!=3.20.0,<4.0.0dev,>=1.15.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_bigquery-3.30.0-py2.py3-none-any.whl (247 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 247.9/247.9 kB 239.3 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting proto-plus<2.0.0dev,>=1.22.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading proto_plus-1.26.0-py3-none-any.whl (50 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 50.2/50.2 kB 187.4 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting typing-extensions
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading typing_extensions-4.12.2-py3-none-any.whl (37 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting packaging>=14.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading packaging-24.2-py3-none-any.whl (65 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 65.5/65.5 kB 220.7 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-resource-manager<3.0.0dev,>=1.3.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_resource_manager-1.14.1-py2.py3-none-any.whl (392 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 392.3/392.3 kB 187.8 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3.0.0dev,>=1.34.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_api_core-2.24.1-py3-none-any.whl (160 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 160.1/160.1 kB 242.8 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting protobuf!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5,<6.0.0dev,>=3.20.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading protobuf-5.29.3-cp38-abi3-manylinux2014_x86_64.whl (319 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 319.7/319.7 kB 237.2 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-storage<3.0.0dev,>=1.32.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_storage-2.19.0-py2.py3-none-any.whl (131 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 131.8/131.8 kB 242.5 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting uvicorn[standard]>=0.16.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading uvicorn-0.34.0-py3-none-any.whl (62 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.3/62.3 kB 192.8 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting starlette>=0.17.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading starlette-0.46.0-py3-none-any.whl (71 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 72.0/72.0 kB 207.3 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting httpx<0.25.0,>=0.23.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading httpx-0.24.1-py3-none-any.whl (75 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 75.4/75.4 kB 91.3 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting fastapi<=0.114.0,>=0.71.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading fastapi-0.114.0-py3-none-any.whl (94 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 94.0/94.0 kB 223.9 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting docker>=5.0.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading docker-7.1.0-py3-none-any.whl (147 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 147.8/147.8 kB 240.2 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting requests>=2.26.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading requests-2.32.3-py3-none-any.whl (64 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 64.9/64.9 kB 221.7 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting urllib3>=1.26.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading urllib3-2.3.0-py3-none-any.whl (128 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 128.4/128.4 kB 234.9 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting starlette>=0.17.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading starlette-0.38.6-py3-none-any.whl (71 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 71.5/71.5 kB 223.5 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting googleapis-common-protos<2.0.dev0,>=1.56.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading googleapis_common_protos-1.68.0-py2.py3-none-any.whl (164 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 165.0/165.0 kB 245.2 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting grpcio-status<2.0.dev0,>=1.33.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading grpcio_status-1.70.0-py3-none-any.whl (14 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting grpcio<2.0dev,>=1.33.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading grpcio-1.70.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (5.9 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.9/5.9 MB 107.8 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting cachetools<6.0,>=2.0.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading cachetools-5.5.2-py3-none-any.whl (10 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pyasn1-modules>=0.2.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading pyasn1_modules-0.4.1-py3-none-any.whl (181 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 181.5/181.5 kB 243.4 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting rsa<5,>=3.1.4
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading rsa-4.9-py3-none-any.whl (34 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting python-dateutil<3.0dev,>=2.7.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 229.9/229.9 kB 248.7 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-core<3.0.0dev,>=2.4.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_core-2.4.2-py2.py3-none-any.whl (29 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-resumable-media<3.0dev,>=2.0.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_resumable_media-2.7.2-py2.py3-none-any.whl (81 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 81.3/81.3 kB 54.4 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting grpc-google-iam-v1<1.0.0dev,>=0.14.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading grpc_google_iam_v1-0.14.0-py2.py3-none-any.whl (27 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-crc32c<2.0dev,>=1.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_crc32c-1.6.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (37 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting httpcore<0.18.0,>=0.15.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading httpcore-0.17.3-py3-none-any.whl (74 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 74.5/74.5 kB 215.5 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting idna
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading idna-3.10-py3-none-any.whl (70 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 70.4/70.4 kB 224.0 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting sniffio
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading sniffio-1.3.1-py3-none-any.whl (10 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting certifi
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading certifi-2025.1.31-py3-none-any.whl (166 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 166.4/166.4 kB 242.9 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pydantic-core==2.27.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading pydantic_core-2.27.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.0 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.0/2.0 MB 150.6 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting annotated-types>=0.6.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading annotated_types-0.7.0-py3-none-any.whl (13 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting numpy<3,>=1.14
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading numpy-2.2.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.4 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 16.4/16.4 MB 222.6 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting anyio<5,>=3.4.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading anyio-4.8.0-py3-none-any.whl (96 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 96.0/96.0 kB 153.7 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting click>=7.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading click-8.1.8-py3-none-any.whl (98 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 98.2/98.2 kB 232.9 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting h11>=0.8
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading h11-0.14.0-py3-none-any.whl (58 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 58.3/58.3 kB 208.5 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting websockets>=10.4
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading websockets-15.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (180 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 180.9/180.9 kB 239.1 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting uvloop!=0.15.0,!=0.15.1,>=0.14.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading uvloop-0.21.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.8 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.8/3.8 MB 234.1 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pyyaml>=5.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading PyYAML-6.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (751 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 751.2/751.2 kB 252.3 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting python-dotenv>=0.13
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading python_dotenv-1.0.1-py3-none-any.whl (19 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting watchfiles>=0.13
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading watchfiles-1.0.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (452 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 452.9/452.9 kB 261.4 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting httptools>=0.6.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading httptools-0.6.4-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (442 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 442.1/442.1 kB 249.6 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting exceptiongroup>=1.0.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading exceptiongroup-1.2.2-py3-none-any.whl (16 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pyasn1<0.7.0,>=0.4.6
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading pyasn1-0.6.1-py3-none-any.whl (83 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 83.1/83.1 kB 214.6 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting six>=1.5
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading six-1.17.0-py2.py3-none-any.whl (11 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting charset-normalizer<4,>=2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading charset_normalizer-3.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (146 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 146.1/146.1 kB 239.1 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Installing collected packages: websockets, uvloop, urllib3, typing-extensions, sniffio, six, pyyaml, python-dotenv, pyasn1, protobuf, packaging, numpy, idna, httptools, h11, grpcio, google-crc32c, exceptiongroup, docstring-parser, click, charset-normalizer, certifi, cachetools, annotated-types, uvicorn, shapely, rsa, requests, python-dateutil, pydantic-core, pyasn1-modules, proto-plus, googleapis-common-protos, google-resumable-media, anyio, watchfiles, starlette, pydantic, httpcore, grpcio-status, google-auth, docker, httpx, grpc-google-iam-v1, google-api-core, fastapi, google-cloud-core, google-cloud-storage, google-cloud-resource-manager, google-cloud-bigquery, google-cloud-aiplatform
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully installed annotated-types-0.7.0 anyio-4.8.0 cachetools-5.5.2 certifi-2025.1.31 charset-normalizer-3.4.1 click-8.1.8 docker-7.1.0 docstring-parser-0.16 exceptiongroup-1.2.2 fastapi-0.114.0 google-api-core-2.24.1 google-auth-2.38.0 google-cloud-aiplatform-1.82.0 google-cloud-bigquery-3.30.0 google-cloud-core-2.4.2 google-cloud-resource-manager-1.14.1 google-cloud-storage-2.19.0 google-crc32c-1.6.0 google-resumable-media-2.7.2 googleapis-common-protos-1.68.0 grpc-google-iam-v1-0.14.0 grpcio-1.70.0 grpcio-status-1.70.0 h11-0.14.0 httpcore-0.17.3 httptools-0.6.4 httpx-0.24.1 idna-3.10 numpy-2.2.3 packaging-24.2 proto-plus-1.26.0 protobuf-5.29.3 pyasn1-0.6.1 pyasn1-modules-0.4.1 pydantic-2.10.6 pydantic-core-2.27.2 python-dateutil-2.9.0.post0 python-dotenv-1.0.1 pyyaml-6.0.2 requests-2.32.3 rsa-4.9 shapely-2.0.7 six-1.17.0 sniffio-1.3.1 starlette-0.38.6 typing-extensions-4.12.2 urllib3-2.3.0 uvicorn-0.34.0 uvloop-0.21.0 watchfiles-1.0.4 websockets-15.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 [notice] A new release of pip is available: 23.0.1 -> 25.0.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 [notice] To update, run: pip install --upgrade pip
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container 0949a860b436
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 7d2a228f53e7
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 9/14 : ENV HANDLER_MODULE=google.cloud.aiplatform.prediction.handler
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in f59da19f59ab
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container f59da19f59ab
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 61b779bda9df
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 10/14 : ENV HANDLER_CLASS=PredictionHandler
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in b2c8ae991789
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container b2c8ae991789
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> a7466780df15
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 11/14 : ENV PREDICTOR_MODULE=predictor
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in 638ac5d06bd8
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container 638ac5d06bd8
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 48d01d5207b8
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 12/14 : ENV PREDICTOR_CLASS=SklearnPredictor
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in c603bc918ba6
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container c603bc918ba6
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> f83a34e49d02
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 13/14 : COPY [".", "."]
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 6ecfd1b36525
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 14/14 : RUN pip install --no-cache-dir --force-reinstall -r requirements.txt
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in 5c16d7912b65
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting scikit-learn
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading scikit_learn-1.6.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (13.5 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 13.5/13.5 MB 48.9 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-aiplatform[prediction]
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_aiplatform-1.82.0-py2.py3-none-any.whl (7.3 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.3/7.3 MB 104.0 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting scipy>=1.6.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading scipy-1.15.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (37.6 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 37.6/37.6 MB 224.4 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting threadpoolctl>=3.1.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading threadpoolctl-3.5.0-py3-none-any.whl (18 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting joblib>=1.2.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading joblib-1.4.2-py3-none-any.whl (301 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 301.8/301.8 kB 250.8 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting numpy>=1.19.5
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading numpy-2.2.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.4 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 16.4/16.4 MB 183.6 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-auth<3.0.0dev,>=2.14.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_auth-2.38.0-py2.py3-none-any.whl (210 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 210.8/210.8 kB 247.2 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-bigquery!=3.20.0,<4.0.0dev,>=1.15.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_bigquery-3.30.0-py2.py3-none-any.whl (247 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 247.9/247.9 kB 239.2 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting typing-extensions
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading typing_extensions-4.12.2-py3-none-any.whl (37 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pydantic<3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading pydantic-2.10.6-py3-none-any.whl (431 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 431.7/431.7 kB 250.8 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting shapely<3.0.0dev
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading shapely-2.0.7-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.5 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.5/2.5 MB 206.3 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting proto-plus<2.0.0dev,>=1.22.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading proto_plus-1.26.0-py3-none-any.whl (50 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 50.2/50.2 kB 193.0 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-resource-manager<3.0.0dev,>=1.3.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_resource_manager-1.14.1-py2.py3-none-any.whl (392 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 392.3/392.3 kB 232.3 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting docstring-parser<1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading docstring_parser-0.16-py3-none-any.whl (36 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-storage<3.0.0dev,>=1.32.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_storage-2.19.0-py2.py3-none-any.whl (131 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 131.8/131.8 kB 233.5 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting protobuf!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5,<6.0.0dev,>=3.20.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading protobuf-5.29.3-cp38-abi3-manylinux2014_x86_64.whl (319 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 319.7/319.7 kB 252.1 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting packaging>=14.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading packaging-24.2-py3-none-any.whl (65 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 65.5/65.5 kB 217.9 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3.0.0dev,>=1.34.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_api_core-2.24.1-py3-none-any.whl (160 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 160.1/160.1 kB 228.0 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting docker>=5.0.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading docker-7.1.0-py3-none-any.whl (147 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 147.8/147.8 kB 244.5 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting fastapi<=0.114.0,>=0.71.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading fastapi-0.114.0-py3-none-any.whl (94 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 94.0/94.0 kB 211.1 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting httpx<0.25.0,>=0.23.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading httpx-0.24.1-py3-none-any.whl (75 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 75.4/75.4 kB 209.7 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting starlette>=0.17.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading starlette-0.46.0-py3-none-any.whl (71 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 72.0/72.0 kB 217.4 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting uvicorn[standard]>=0.16.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading uvicorn-0.34.0-py3-none-any.whl (62 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.3/62.3 kB 207.5 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting urllib3>=1.26.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading urllib3-2.3.0-py3-none-any.whl (128 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 128.4/128.4 kB 210.1 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting requests>=2.26.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading requests-2.32.3-py3-none-any.whl (64 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 64.9/64.9 kB 213.6 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting starlette>=0.17.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading starlette-0.38.6-py3-none-any.whl (71 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 71.5/71.5 kB 208.3 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting googleapis-common-protos<2.0.dev0,>=1.56.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading googleapis_common_protos-1.68.0-py2.py3-none-any.whl (164 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 165.0/165.0 kB 242.3 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting grpcio<2.0dev,>=1.33.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading grpcio-1.70.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (5.9 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.9/5.9 MB 242.4 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting grpcio-status<2.0.dev0,>=1.33.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading grpcio_status-1.70.0-py3-none-any.whl (14 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting cachetools<6.0,>=2.0.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading cachetools-5.5.2-py3-none-any.whl (10 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting rsa<5,>=3.1.4
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading rsa-4.9-py3-none-any.whl (34 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pyasn1-modules>=0.2.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading pyasn1_modules-0.4.1-py3-none-any.whl (181 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 181.5/181.5 kB 187.1 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-core<3.0.0dev,>=2.4.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_core-2.4.2-py2.py3-none-any.whl (29 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-resumable-media<3.0dev,>=2.0.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_resumable_media-2.7.2-py2.py3-none-any.whl (81 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 81.3/81.3 kB 226.2 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting python-dateutil<3.0dev,>=2.7.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 229.9/229.9 kB 247.1 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting grpc-google-iam-v1<1.0.0dev,>=0.14.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading grpc_google_iam_v1-0.14.0-py2.py3-none-any.whl (27 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-crc32c<2.0dev,>=1.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_crc32c-1.6.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (37 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting httpcore<0.18.0,>=0.15.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading httpcore-0.17.3-py3-none-any.whl (74 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 74.5/74.5 kB 206.0 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting certifi
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading certifi-2025.1.31-py3-none-any.whl (166 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 166.4/166.4 kB 248.8 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting idna
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading idna-3.10-py3-none-any.whl (70 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 70.4/70.4 kB 201.9 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting sniffio
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading sniffio-1.3.1-py3-none-any.whl (10 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting annotated-types>=0.6.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading annotated_types-0.7.0-py3-none-any.whl (13 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pydantic-core==2.27.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading pydantic_core-2.27.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.0 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.0/2.0 MB 252.0 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting anyio<5,>=3.4.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading anyio-4.8.0-py3-none-any.whl (96 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 96.0/96.0 kB 229.7 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting h11>=0.8
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading h11-0.14.0-py3-none-any.whl (58 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 58.3/58.3 kB 203.4 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting click>=7.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading click-8.1.8-py3-none-any.whl (98 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 98.2/98.2 kB 238.6 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting python-dotenv>=0.13
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading python_dotenv-1.0.1-py3-none-any.whl (19 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting uvloop!=0.15.0,!=0.15.1,>=0.14.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading uvloop-0.21.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.8 MB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.8/3.8 MB 185.7 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting httptools>=0.6.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading httptools-0.6.4-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (442 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 442.1/442.1 kB 240.0 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting websockets>=10.4
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading websockets-15.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (180 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 180.9/180.9 kB 245.6 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting watchfiles>=0.13
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading watchfiles-1.0.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (452 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 452.9/452.9 kB 249.9 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pyyaml>=5.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading PyYAML-6.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (751 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 751.2/751.2 kB 172.4 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting exceptiongroup>=1.0.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading exceptiongroup-1.2.2-py3-none-any.whl (16 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pyasn1<0.7.0,>=0.4.6
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading pyasn1-0.6.1-py3-none-any.whl (83 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 83.1/83.1 kB 204.8 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting six>=1.5
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading six-1.17.0-py2.py3-none-any.whl (11 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting charset-normalizer<4,>=2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading charset_normalizer-3.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (146 kB)
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 146.1/146.1 kB 232.9 MB/s eta 0:00:00
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Installing collected packages: websockets, uvloop, urllib3, typing-extensions, threadpoolctl, sniffio, six, pyyaml, python-dotenv, pyasn1, protobuf, packaging, numpy, joblib, idna, httptools, h11, grpcio, google-crc32c, exceptiongroup, docstring-parser, click, charset-normalizer, certifi, cachetools, annotated-types, uvicorn, shapely, scipy, rsa, requests, python-dateutil, pydantic-core, pyasn1-modules, proto-plus, googleapis-common-protos, google-resumable-media, anyio, watchfiles, starlette, scikit-learn, pydantic, httpcore, grpcio-status, google-auth, docker, httpx, grpc-google-iam-v1, google-api-core, fastapi, google-cloud-core, google-cloud-storage, google-cloud-resource-manager, google-cloud-bigquery, google-cloud-aiplatform
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: websockets
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: websockets 15.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling websockets-15.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled websockets-15.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: uvloop
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: uvloop 0.21.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling uvloop-0.21.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled uvloop-0.21.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: urllib3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: urllib3 2.3.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling urllib3-2.3.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled urllib3-2.3.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: typing-extensions
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: typing_extensions 4.12.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling typing_extensions-4.12.2:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled typing_extensions-4.12.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: sniffio
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: sniffio 1.3.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling sniffio-1.3.1:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled sniffio-1.3.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: six
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: six 1.17.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling six-1.17.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled six-1.17.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: pyyaml
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: PyYAML 6.0.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling PyYAML-6.0.2:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled PyYAML-6.0.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: python-dotenv
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: python-dotenv 1.0.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling python-dotenv-1.0.1:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled python-dotenv-1.0.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: pyasn1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: pyasn1 0.6.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling pyasn1-0.6.1:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled pyasn1-0.6.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: protobuf
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: protobuf 5.29.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling protobuf-5.29.3:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled protobuf-5.29.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: packaging
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: packaging 24.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling packaging-24.2:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled packaging-24.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: numpy
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: numpy 2.2.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling numpy-2.2.3:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled numpy-2.2.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: idna
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: idna 3.10
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling idna-3.10:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled idna-3.10
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: httptools
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: httptools 0.6.4
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling httptools-0.6.4:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled httptools-0.6.4
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: h11
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: h11 0.14.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling h11-0.14.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled h11-0.14.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: grpcio
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: grpcio 1.70.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling grpcio-1.70.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled grpcio-1.70.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-crc32c
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-crc32c 1.6.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-crc32c-1.6.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-crc32c-1.6.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: exceptiongroup
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: exceptiongroup 1.2.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling exceptiongroup-1.2.2:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled exceptiongroup-1.2.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: docstring-parser
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: docstring_parser 0.16
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling docstring_parser-0.16:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled docstring_parser-0.16
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: click
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: click 8.1.8
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling click-8.1.8:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled click-8.1.8
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: charset-normalizer
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: charset-normalizer 3.4.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling charset-normalizer-3.4.1:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled charset-normalizer-3.4.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: certifi
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: certifi 2025.1.31
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling certifi-2025.1.31:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled certifi-2025.1.31
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: cachetools
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: cachetools 5.5.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling cachetools-5.5.2:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled cachetools-5.5.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: annotated-types
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: annotated-types 0.7.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling annotated-types-0.7.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled annotated-types-0.7.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: uvicorn
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: uvicorn 0.34.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling uvicorn-0.34.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled uvicorn-0.34.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: shapely
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: shapely 2.0.7
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling shapely-2.0.7:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled shapely-2.0.7
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: rsa
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: rsa 4.9
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling rsa-4.9:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled rsa-4.9
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: requests
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: requests 2.32.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling requests-2.32.3:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled requests-2.32.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: python-dateutil
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: python-dateutil 2.9.0.post0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling python-dateutil-2.9.0.post0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled python-dateutil-2.9.0.post0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: pydantic-core
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: pydantic_core 2.27.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling pydantic_core-2.27.2:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled pydantic_core-2.27.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: pyasn1-modules
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: pyasn1_modules 0.4.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling pyasn1_modules-0.4.1:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled pyasn1_modules-0.4.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: proto-plus
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: proto-plus 1.26.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling proto-plus-1.26.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled proto-plus-1.26.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: googleapis-common-protos
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: googleapis-common-protos 1.68.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling googleapis-common-protos-1.68.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled googleapis-common-protos-1.68.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-resumable-media
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-resumable-media 2.7.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-resumable-media-2.7.2:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-resumable-media-2.7.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: anyio
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: anyio 4.8.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling anyio-4.8.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled anyio-4.8.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: watchfiles
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: watchfiles 1.0.4
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling watchfiles-1.0.4:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled watchfiles-1.0.4
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: starlette
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: starlette 0.38.6
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling starlette-0.38.6:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled starlette-0.38.6
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: pydantic
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: pydantic 2.10.6
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling pydantic-2.10.6:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled pydantic-2.10.6
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: httpcore
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: httpcore 0.17.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling httpcore-0.17.3:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled httpcore-0.17.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: grpcio-status
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: grpcio-status 1.70.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling grpcio-status-1.70.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled grpcio-status-1.70.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-auth
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-auth 2.38.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-auth-2.38.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-auth-2.38.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: docker
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: docker 7.1.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling docker-7.1.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled docker-7.1.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: httpx
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: httpx 0.24.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling httpx-0.24.1:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled httpx-0.24.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: grpc-google-iam-v1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: grpc-google-iam-v1 0.14.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling grpc-google-iam-v1-0.14.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled grpc-google-iam-v1-0.14.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-api-core
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-api-core 2.24.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-api-core-2.24.1:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-api-core-2.24.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: fastapi
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: fastapi 0.114.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling fastapi-0.114.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled fastapi-0.114.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-cloud-core
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-cloud-core 2.4.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-cloud-core-2.4.2:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-cloud-core-2.4.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-cloud-storage
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-cloud-storage 2.19.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-cloud-storage-2.19.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-cloud-storage-2.19.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-cloud-resource-manager
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-cloud-resource-manager 1.14.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-cloud-resource-manager-1.14.1:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-cloud-resource-manager-1.14.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-cloud-bigquery
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-cloud-bigquery 3.30.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-cloud-bigquery-3.30.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-cloud-bigquery-3.30.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-cloud-aiplatform
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-cloud-aiplatform 1.82.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-cloud-aiplatform-1.82.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-cloud-aiplatform-1.82.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully installed annotated-types-0.7.0 anyio-4.8.0 cachetools-5.5.2 certifi-2025.1.31 charset-normalizer-3.4.1 click-8.1.8 docker-7.1.0 docstring-parser-0.16 exceptiongroup-1.2.2 fastapi-0.114.0 google-api-core-2.24.1 google-auth-2.38.0 google-cloud-aiplatform-1.82.0 google-cloud-bigquery-3.30.0 google-cloud-core-2.4.2 google-cloud-resource-manager-1.14.1 google-cloud-storage-2.19.0 google-crc32c-1.6.0 google-resumable-media-2.7.2 googleapis-common-protos-1.68.0 grpc-google-iam-v1-0.14.0 grpcio-1.70.0 grpcio-status-1.70.0 h11-0.14.0 httpcore-0.17.3 httptools-0.6.4 httpx-0.24.1 idna-3.10 joblib-1.4.2 numpy-2.2.3 packaging-24.2 proto-plus-1.26.0 protobuf-5.29.3 pyasn1-0.6.1 pyasn1-modules-0.4.1 pydantic-2.10.6 pydantic-core-2.27.2 python-dateutil-2.9.0.post0 python-dotenv-1.0.1 pyyaml-6.0.2 requests-2.32.3 rsa-4.9 scikit-learn-1.6.1 scipy-1.15.2 shapely-2.0.7 six-1.17.0 sniffio-1.3.1 starlette-0.38.6 threadpoolctl-3.5.0 typing-extensions-4.12.2 urllib3-2.3.0 uvicorn-0.34.0 uvloop-0.21.0 watchfiles-1.0.4 websockets-15.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 [notice] A new release of pip is available: 23.0.1 -> 25.0.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 [notice] To update, run: pip install --upgrade pip
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container 5c16d7912b65
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> e35a55d9bdb4
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully built e35a55d9bdb4
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully tagged gcr.io/ucaip-sample-tests/prediction-cpr/sklearn:20250227_233526
INFO google.cloud.aiplatform.prediction.local_endpoint:local_endpoint.py:237 Got the project id from the global config: ucaip-sample-tests.
INFO google.cloud.aiplatform.prediction.local_endpoint:local_endpoint.py:237 Got the project id from the global config: ucaip-sample-tests.
INFO root:test_prediction_cpr.py:90 CompletedProcess(args=['gcloud', 'auth', 'configure-docker'], returncode=0, stdout=b'', stderr=b'Adding credentials for all GCR repositories.\nWARNING: A long list of credential helpers may cause delays running \'docker build\'. We recommend passing the registry name to configure only the registry you are using.\nAfter update, the following will be written to your Docker config file located \nat [/root/.docker/config.json]:\n {\n "credHelpers": {\n "gcr.io": "gcloud",\n "us.gcr.io": "gcloud",\n "eu.gcr.io": "gcloud",\n "asia.gcr.io": "gcloud",\n "staging-k8s.gcr.io": "gcloud",\n "marketplace.gcr.io": "gcloud"\n }\n}\n\nDo you want to continue (Y/n)? \nDocker configuration file updated.\n')
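Note: the stderr captured above warns that a long list of credential helpers can slow `docker build` and recommends configuring only the registry actually in use. A minimal sketch of that narrower invocation, run from Python for consistency with the surrounding tests; the registry host is an illustrative assumption, not taken from this log:

```python
import subprocess

# Register the gcloud credential helper for a single registry host
# instead of every gcr.io domain, as the warning above recommends.
# "us-central1-docker.pkg.dev" is hypothetical.
subprocess.run(
    ["gcloud", "auth", "configure-docker", "us-central1-docker.pkg.dev", "--quiet"],
    check=True,
)
```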
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 The push refers to repository [gcr.io/ucaip-sample-tests/prediction-cpr/sklearn]
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 61d9712a39a8: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 aa48bc8816f6: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 e092c372e690: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 7dc9c93f38ed: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 fb29abb2209e: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 784c5d2bb2c2: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ecbadaa33ad9: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 4b017a36fd9c: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 20a9b386e10e: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 f8217d7865d2: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 01c9a2a5f237: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 4b017a36fd9c: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 20a9b386e10e: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 f8217d7865d2: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 01c9a2a5f237: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 784c5d2bb2c2: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ecbadaa33ad9: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 error parsing HTTP 412 response body: invalid character 'C' looking for beginning of value: "Container Registry is deprecated and shutting down, please use the auto migration tool to migrate to Artifact Registry. For more details see: https://cloud.google.com/artifact-registry/docs/transition/auto-migrate-gcr-ar"
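Note: the HTTP 412 above means Container Registry rejected the push because GCR is shut down for this repository, so the image built earlier never reaches a registry. One remedy, beyond the auto-migration tool the error links to, is to retag and push to Artifact Registry. A minimal sketch using the docker SDK (version 7.1.0 is installed in the image above); the pkg.dev host and repository path are illustrative assumptions:

```python
import docker

# Retag the image built above for an Artifact Registry repository and
# push it there. The pkg.dev path is hypothetical.
client = docker.from_env()
image = client.images.get(
    "gcr.io/ucaip-sample-tests/prediction-cpr/sklearn:20250227_233526"
)
target = "us-central1-docker.pkg.dev/ucaip-sample-tests/prediction-cpr/sklearn"
image.tag(target, tag="20250227_233526")
for status in client.images.push(
    target, tag="20250227_233526", stream=True, decode=True
):
    print(status)  # push progress, one decoded JSON object per line
```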
___________ TestExperimentModel.test_deploy_model_with_gpu_container ___________
[gw10] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
self =
shared_state = {'bucket': , 'resources': [}
def test_deploy_model_with_gpu_container(self, shared_state):
aiplatform.init(
project=e2e_base._PROJECT,
location=e2e_base._LOCATION,
staging_bucket=f"gs://{shared_state['staging_bucket_name']}",
)
# It takes a long time to deploy a model. To reduce the system test run
# time, we randomly choose one registered model to test deployment.
> registered_model = random.choice(self.registered_models_gpu)
tests/system/aiplatform/test_experiment_model.py:357:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = , seq = []
def choice(self, seq):
"""Choose a random element from a non-empty sequence."""
# raises IndexError if seq is empty
> return seq[self._randbelow(len(seq))]
E IndexError: list index out of range
/usr/local/lib/python3.10/random.py:378: IndexError
---------------------------- Captured log teardown -----------------------------
INFO google.cloud.aiplatform.models:base.py:189 Undeploying Endpoint model: projects/580378083368/locations/us-central1/endpoints/1158837976675909632
INFO google.cloud.aiplatform.models:base.py:209 Undeploy Endpoint model backing LRO: projects/580378083368/locations/us-central1/endpoints/1158837976675909632/operations/725088439178887168
INFO google.cloud.aiplatform.models:base.py:222 Endpoint model undeployed. Resource name: projects/580378083368/locations/us-central1/endpoints/1158837976675909632
INFO google.cloud.aiplatform.base:base.py:189 Deleting Endpoint: projects/580378083368/locations/us-central1/endpoints/1158837976675909632
INFO google.cloud.aiplatform.base:base.py:222 Endpoint deleted. Resource name: projects/580378083368/locations/us-central1/endpoints/1158837976675909632
INFO google.cloud.aiplatform.base:base.py:156 Deleting Endpoint resource: projects/580378083368/locations/us-central1/endpoints/1158837976675909632
INFO google.cloud.aiplatform.base:base.py:161 Delete Endpoint backing LRO: projects/580378083368/locations/us-central1/operations/8724607277295730688
INFO google.cloud.aiplatform.base:base.py:174 Endpoint resource projects/580378083368/locations/us-central1/endpoints/1158837976675909632 deleted.
INFO google.cloud.aiplatform.base:base.py:189 Deleting ExperimentModel: projects/580378083368/locations/us-central1/metadataStores/default/artifacts/sk-model
INFO google.cloud.aiplatform.base:base.py:222 ExperimentModel deleted. Resource name: projects/580378083368/locations/us-central1/metadataStores/default/artifacts/sk-model
INFO google.cloud.aiplatform.base:base.py:156 Deleting ExperimentModel resource: projects/580378083368/locations/us-central1/metadataStores/default/artifacts/sk-model
INFO google.cloud.aiplatform.base:base.py:161 Delete ExperimentModel backing LRO: projects/580378083368/locations/us-central1/metadataStores/default/artifacts/sk-model/operations/8008534936543821824
INFO google.cloud.aiplatform.base:base.py:174 ExperimentModel resource projects/580378083368/locations/us-central1/metadataStores/default/artifacts/sk-model deleted.
INFO google.cloud.aiplatform.base:base.py:189 Deleting Model: projects/580378083368/locations/us-central1/models/7855593865552592896
INFO google.cloud.aiplatform.base:base.py:222 Model deleted. Resource name: projects/580378083368/locations/us-central1/models/7855593865552592896
INFO google.cloud.aiplatform.base:base.py:156 Deleting Model resource: projects/580378083368/locations/us-central1/models/7855593865552592896
INFO google.cloud.aiplatform.base:base.py:161 Delete Model backing LRO: projects/580378083368/locations/us-central1/models/7855593865552592896/operations/736347438247313408
INFO google.cloud.aiplatform.base:base.py:174 Model resource projects/580378083368/locations/us-central1/models/7855593865552592896 deleted.
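Note: the IndexError above is `random.choice` called on an empty `self.registered_models_gpu`, i.e. the earlier registration steps produced no GPU models for this worker, and the test fails on the sampling rather than on the real condition. A minimal guard sketch; the helper name is hypothetical, and skipping (rather than failing) is only one possible policy:

```python
import random

import pytest


def _pick_registered_model(models):
    # random.choice raises IndexError on an empty sequence (seq == [] in
    # the traceback above), so surface the actual condition instead.
    if not models:
        pytest.skip("no registered GPU models produced by earlier tests")
    return random.choice(models)
```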
_ TestGenerativeModels.test_generate_content_function_calling[grpc-PROD_ENDPOINT] _
[gw11] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
self =
api_endpoint_env_name = 'PROD_ENDPOINT'
def test_generate_content_function_calling(self, api_endpoint_env_name):
get_current_weather_func = generative_models.FunctionDeclaration(
name="get_current_weather",
description="Get the current weather in a given location",
parameters=_REQUEST_FUNCTION_PARAMETER_SCHEMA_STRUCT,
)
weather_tool = generative_models.Tool(
function_declarations=[get_current_weather_func],
)
model = generative_models.GenerativeModel(
GEMINI_MODEL_NAME,
# Specifying the tools once to avoid specifying them in every request
tools=[weather_tool],
)
# Define the user's prompt in a Content object that we can reuse in model calls
prompt = "What is the weather like in Boston?"
user_prompt_content = generative_models.Content(
role="user",
parts=[
generative_models.Part.from_text(prompt),
],
)
# Send the prompt and instruct the model to generate content using the Tool
response = model.generate_content(
user_prompt_content,
generation_config={"temperature": 0},
tools=[weather_tool],
)
response_function_call_content = response.candidates[0].content
assert (
response.candidates[0].content.parts[0].function_call.name
== "get_current_weather"
)
assert response.candidates[0].function_calls[0].args["location"]
assert len(response.candidates[0].function_calls) == 1
> assert (
response.candidates[0].function_calls[0]
== response.candidates[0].content.parts[0].function_call
)
E assert name: "get_current_weather"\nargs {\n fields {\n key: "location"\n value {\n string_value: "Boston, MA"\n }\n }\n}\n == name: "get_current_weather"\nargs {\n fields {\n key: "location"\n value {\n string_value: "Boston, MA"\n }\n }\n}\n
E + where name: "get_current_weather"\nargs {\n fields {\n key: "location"\n value {\n string_value: "Boston, MA"\n }\n }\n}\n = function_call {\n name: "get_current_weather"\n args {\n fields {\n key: "location"\n value {\n string_value: "Boston, MA"\n }\n }\n }\n}\n.function_call
tests/system/vertexai/test_generative_models.py:565: AssertionError
_ TestGenerativeModels.test_generate_content_function_calling[rest-PROD_ENDPOINT] _
[gw11] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
self =
api_endpoint_env_name = 'PROD_ENDPOINT'
def test_generate_content_function_calling(self, api_endpoint_env_name):
get_current_weather_func = generative_models.FunctionDeclaration(
name="get_current_weather",
description="Get the current weather in a given location",
parameters=_REQUEST_FUNCTION_PARAMETER_SCHEMA_STRUCT,
)
weather_tool = generative_models.Tool(
function_declarations=[get_current_weather_func],
)
model = generative_models.GenerativeModel(
GEMINI_MODEL_NAME,
# Specifying the tools once to avoid specifying them in every request
tools=[weather_tool],
)
# Define the user's prompt in a Content object that we can reuse in model calls
prompt = "What is the weather like in Boston?"
user_prompt_content = generative_models.Content(
role="user",
parts=[
generative_models.Part.from_text(prompt),
],
)
# Send the prompt and instruct the model to generate content using the Tool
response = model.generate_content(
user_prompt_content,
generation_config={"temperature": 0},
tools=[weather_tool],
)
response_function_call_content = response.candidates[0].content
assert (
response.candidates[0].content.parts[0].function_call.name
== "get_current_weather"
)
assert response.candidates[0].function_calls[0].args["location"]
assert len(response.candidates[0].function_calls) == 1
> assert (
response.candidates[0].function_calls[0]
== response.candidates[0].content.parts[0].function_call
)
E assert name: "get_current_weather"\nargs {\n fields {\n key: "location"\n value {\n string_value: "Boston, MA"\n }\n }\n}\n == name: "get_current_weather"\nargs {\n fields {\n key: "location"\n value {\n string_value: "Boston, MA"\n }\n }\n}\n
E + where name: "get_current_weather"\nargs {\n fields {\n key: "location"\n value {\n string_value: "Boston, MA"\n }\n }\n}\n = function_call {\n name: "get_current_weather"\n args {\n fields {\n key: "location"\n value {\n string_value: "Boston, MA"\n }\n }\n }\n}\n.function_call
tests/system/vertexai/test_generative_models.py:565: AssertionError
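Note: both the grpc and rest variants fail on the final `==` even though the two sides render identically. That pattern usually means the operands are different message classes (for example a proto-plus wrapper on one side and the raw protobuf message on the other), since protobuf equality requires matching types before it compares fields. A minimal sketch of a type-insensitive comparison, assuming proto-plus-style wrappers that expose the raw message as `._pb`:

```python
def protos_equivalent(a, b):
    # Unwrap proto-plus wrappers to the underlying protobuf messages,
    # then compare deterministic serializations; identical bytes of the
    # same raw type imply field-for-field equality.
    raw_a = getattr(a, "_pb", a)
    raw_b = getattr(b, "_pb", b)
    return type(raw_a) is type(raw_b) and raw_a.SerializeToString(
        deterministic=True
    ) == raw_b.SerializeToString(deterministic=True)
```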
_ TestTokenization.test_count_tokens_content_is_function_response[gemini-1.0-pro-PROD_ENDPOINT] _
[gw14] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
self =
model_name = 'gemini-1.0-pro'
@pytest.mark.parametrize(
"model_name",
_MODELS,
)
def test_count_tokens_content_is_function_response(self, model_name):
part = Part._from_gapic(
gapic_content_types.Part(function_response=_FUNCTION_RESPONSE)
)
tokenizer = tokenizer_preview(model_name)
model = GenerativeModel(model_name)
assert tokenizer.count_tokens(part).total_tokens
> assert (
tokenizer.count_tokens(part).total_tokens
== model.count_tokens(part).total_tokens
)
E assert 7 == 0
E + where 7 = CountTokensResult(total_tokens=7).total_tokens
E + where CountTokensResult(total_tokens=7) = count_tokens(function_response {\n name: "function_response"\n response {\n fields {\n key: "string_key"\n value {\n string_value: "value"\n }\n }\n }\n}\n)
E + where count_tokens = .count_tokens
E + and 0 = total_billable_characters: 32\nprompt_tokens_details {\n modality: TEXT\n}\n.total_tokens
E + where total_billable_characters: 32\nprompt_tokens_details {\n modality: TEXT\n}\n = count_tokens(function_response {\n name: "function_response"\n response {\n fields {\n key: "string_key"\n value {\n string_value: "value"\n }\n }\n }\n}\n)
E + where count_tokens = .count_tokens
tests/system/vertexai/test_tokenization.py:284: AssertionError
_ TestTokenization.test_count_tokens_content_is_function_response[gemini-1.5-pro-PROD_ENDPOINT] _
[gw14] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
self =
model_name = 'gemini-1.5-pro'
@pytest.mark.parametrize(
"model_name",
_MODELS,
)
def test_count_tokens_content_is_function_response(self, model_name):
part = Part._from_gapic(
gapic_content_types.Part(function_response=_FUNCTION_RESPONSE)
)
tokenizer = tokenizer_preview(model_name)
model = GenerativeModel(model_name)
assert tokenizer.count_tokens(part).total_tokens
> assert (
tokenizer.count_tokens(part).total_tokens
== model.count_tokens(part).total_tokens
)
E assert 7 == 0
E + where 7 = CountTokensResult(total_tokens=7).total_tokens
E + where CountTokensResult(total_tokens=7) = count_tokens(function_response {\n name: "function_response"\n response {\n fields {\n key: "string_key"\n value {\n string_value: "value"\n }\n }\n }\n}\n)
E + where count_tokens = .count_tokens
E + and 0 = total_billable_characters: 32\nprompt_tokens_details {\n modality: TEXT\n}\n.total_tokens
E + where total_billable_characters: 32\nprompt_tokens_details {\n modality: TEXT\n}\n = count_tokens(function_response {\n name: "function_response"\n response {\n fields {\n key: "string_key"\n value {\n string_value: "value"\n }\n }\n }\n}\n)
E + where count_tokens = .count_tokens
tests/system/vertexai/test_tokenization.py:284: AssertionError
_ TestTokenization.test_count_tokens_content_is_function_response[gemini-1.5-flash-PROD_ENDPOINT] _
[gw14] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
self =
model_name = 'gemini-1.5-flash'
@pytest.mark.parametrize(
"model_name",
_MODELS,
)
def test_count_tokens_content_is_function_response(self, model_name):
part = Part._from_gapic(
gapic_content_types.Part(function_response=_FUNCTION_RESPONSE)
)
tokenizer = tokenizer_preview(model_name)
model = GenerativeModel(model_name)
assert tokenizer.count_tokens(part).total_tokens
> assert (
tokenizer.count_tokens(part).total_tokens
== model.count_tokens(part).total_tokens
)
E assert 7 == 0
E + where 7 = CountTokensResult(total_tokens=7).total_tokens
E + where CountTokensResult(total_tokens=7) = count_tokens(function_response {\n name: "function_response"\n response {\n fields {\n key: "string_key"\n value {\n string_value: "value"\n }\n }\n }\n}\n)
E + where count_tokens = .count_tokens
E + and 0 = total_billable_characters: 32\nprompt_tokens_details {\n modality: TEXT\n}\n.total_tokens
E + where total_billable_characters: 32\nprompt_tokens_details {\n modality: TEXT\n}\n = count_tokens(function_response {\n name: "function_response"\n response {\n fields {\n key: "string_key"\n value {\n string_value: "value"\n }\n }\n }\n}\n)
E + where count_tokens = .count_tokens
tests/system/vertexai/test_tokenization.py:284: AssertionError
_ TestTokenization.test_count_tokens_content_is_function_response[gemini-1.5-flash-002-PROD_ENDPOINT] _
[gw14] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
self =
model_name = 'gemini-1.5-flash-002'
@pytest.mark.parametrize(
"model_name",
_MODELS,
)
def test_count_tokens_content_is_function_response(self, model_name):
part = Part._from_gapic(
gapic_content_types.Part(function_response=_FUNCTION_RESPONSE)
)
tokenizer = tokenizer_preview(model_name)
model = GenerativeModel(model_name)
assert tokenizer.count_tokens(part).total_tokens
> assert (
tokenizer.count_tokens(part).total_tokens
== model.count_tokens(part).total_tokens
)
E assert 7 == 0
E + where 7 = CountTokensResult(total_tokens=7).total_tokens
E + where CountTokensResult(total_tokens=7) = count_tokens(function_response {\n name: "function_response"\n response {\n fields {\n key: "string_key"\n value {\n string_value: "value"\n }\n }\n }\n}\n)
E + where count_tokens = .count_tokens
E + and 0 = total_billable_characters: 32\nprompt_tokens_details {\n modality: TEXT\n}\n.total_tokens
E + where total_billable_characters: 32\nprompt_tokens_details {\n modality: TEXT\n}\n = count_tokens(function_response {\n name: "function_response"\n response {\n fields {\n key: "string_key"\n value {\n string_value: "value"\n }\n }\n }\n}\n)
E + where count_tokens = .count_tokens
tests/system/vertexai/test_tokenization.py:284: AssertionError
_ TestTokenization.test_count_tokens_content_is_function_response[gemini-1.5-pro-002-PROD_ENDPOINT] _
[gw14] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
self =
model_name = 'gemini-1.5-pro-002'
@pytest.mark.parametrize(
"model_name",
_MODELS,
)
def test_count_tokens_content_is_function_response(self, model_name):
part = Part._from_gapic(
gapic_content_types.Part(function_response=_FUNCTION_RESPONSE)
)
tokenizer = tokenizer_preview(model_name)
model = GenerativeModel(model_name)
assert tokenizer.count_tokens(part).total_tokens
> assert (
tokenizer.count_tokens(part).total_tokens
== model.count_tokens(part).total_tokens
)
E assert 7 == 0
E + where 7 = CountTokensResult(total_tokens=7).total_tokens
E + where CountTokensResult(total_tokens=7) = count_tokens(function_response {\n name: "function_response"\n response {\n fields {\n key: "string_key"\n value {\n string_value: "value"\n }\n }\n }\n}\n)
E + where count_tokens = .count_tokens
E + and 0 = total_billable_characters: 32\nprompt_tokens_details {\n modality: TEXT\n}\n.total_tokens
E + where total_billable_characters: 32\nprompt_tokens_details {\n modality: TEXT\n}\n = count_tokens(function_response {\n name: "function_response"\n response {\n fields {\n key: "string_key"\n value {\n string_value: "value"\n }\n }\n }\n}\n)
E + where count_tokens = .count_tokens
tests/system/vertexai/test_tokenization.py:284: AssertionError
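Note: all five parameterizations fail the same way: the local tokenizer counts 7 tokens for the function_response Part, while the service response carries only `total_billable_characters` and a zero `total_tokens`. The disagreement is therefore about how function responses are counted, not about any one model. A standalone repro sketch; the import paths and the payload mirroring `_FUNCTION_RESPONSE` are assumptions based on the test code above:

```python
from vertexai.generative_models import GenerativeModel, Part
from vertexai.preview.tokenization import get_tokenizer_for_model

model_name = "gemini-1.5-pro"
# Payload shaped like _FUNCTION_RESPONSE in the failing test.
part = Part.from_function_response(
    name="function_response", response={"string_key": "value"}
)
local_count = get_tokenizer_for_model(model_name).count_tokens(part).total_tokens
remote_count = GenerativeModel(model_name).count_tokens(part).total_tokens
print(local_count, remote_count)  # the log above shows 7 vs 0
```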
________________________ TestRayData.test_ray_data[2.9] ________________________
[gw13] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
self =
cluster_ray_version = '2.9'
@pytest.mark.parametrize("cluster_ray_version", ["2.9", "2.33"])
def test_ray_data(self, cluster_ray_version):
head_node_type = vertex_ray.Resources()
worker_node_types = [
vertex_ray.Resources(),
vertex_ray.Resources(),
vertex_ray.Resources(),
]
assert ray.__version__ == RAY_VERSION
aiplatform.init(project=PROJECT_ID, location="us-central1")
timestamp = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
# Create cluster, get dashboard address
> cluster_resource_name = vertex_ray.create_ray_cluster(
head_node_type=head_node_type,
worker_node_types=worker_node_types,
cluster_name=f"ray-cluster-{timestamp}-test-ray-data",
ray_version=cluster_ray_version,
)
tests/system/vertex_ray/test_ray_data.py:100:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/vertex_ray/cluster_init.py:373: in create_ray_cluster
response = _gapic_utils.get_persistent_resource(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
persistent_resource_name = 'projects/580378083368/locations/us-central1/persistentResources/ray-cluster-2025-02-27-23-43-54-test-ray-data'
tolerance = 1
def get_persistent_resource(
persistent_resource_name: str, tolerance: Optional[int] = 0
):
"""Get persistent resource.
Args:
persistent_resource_name:
"projects//locations//persistentResources/".
tolerance: number of attemps to get persistent resource.
Returns:
aiplatform_v1.PersistentResource if state is RUNNING.
Raises:
ValueError: Invalid cluster resource name.
RuntimeError: Service returns error.
RuntimeError: Cluster resource state is STOPPING.
RuntimeError: Cluster resource state is ERROR.
"""
client = create_persistent_resource_client()
request = GetPersistentResourceRequest(name=persistent_resource_name)
# TODO(b/277117901): Add test cases for polling and error handling
num_attempts = 0
while True:
try:
response = client.get_persistent_resource(request)
except exceptions.NotFound:
response = None
if num_attempts >= tolerance:
raise ValueError(
"[Ray on Vertex AI]: Invalid cluster_resource_name (404 not found)."
)
if response:
if response.error.message:
logging.error("[Ray on Vertex AI]: %s" % response.error.message)
> raise RuntimeError("[Ray on Vertex AI]: Cluster returned an error.")
E RuntimeError: [Ray on Vertex AI]: Cluster returned an error.
google/cloud/aiplatform/vertex_ray/util/_gapic_utils.py:115: RuntimeError
----------------------------- Captured stdout call -----------------------------
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 1; sleeping for 0:02:30 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 2; sleeping for 0:01:54.750000 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 3; sleeping for 0:01:27.783750 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 4; sleeping for 0:01:07.154569 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 5; sleeping for 0:00:51.373245 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 6; sleeping for 0:00:39.300532 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 7; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 8; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 9; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 10; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 11; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 12; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 13; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 14; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 15; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 16; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 17; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 18; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 19; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 20; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 21; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 22; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 23; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 24; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 25; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 26; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 27; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 28; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 29; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 30; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 31; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 32; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 33; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 34; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 35; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 36; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 37; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 38; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 39; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 40; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 41; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 42; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 43; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 44; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 45; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 46; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 47; sleeping for 0:00:30.064907 seconds
------------------------------ Captured log call -------------------------------
ERROR root:_gapic_utils.py:114 [Ray on Vertex AI]: An internal error occurred on your cluster. Please try recreating one in a few minutes. If you still experience errors, contact Cloud AI Platform.
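Note: the sleeps in the captured stdout follow a simple decaying schedule: each wait is 76.5% of the previous one, starting from 150 s, and stops shrinking once another step would fall below 30 s, which is why the log settles at 30.064907 s. A sketch that reproduces the printed values; the ratio and floor are inferred from this log, not read from the SDK source:

```python
def provisioning_sleeps(initial=150.0, ratio=0.765, floor=30.0):
    # Yields 150.0, 114.75, 87.78375, 67.154569..., then holds at
    # 30.064907 once another multiply would drop below the floor,
    # matching the "sleeping for ..." lines above.
    delay = initial
    while True:
        yield delay
        if delay * ratio >= floor:
            delay *= ratio
```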
_______________________ TestRayData.test_ray_data[2.33] ________________________
[gw13] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
args = (parent: "projects/580378083368/locations/us-central1"
persistent_resource {
resource_pools {
id: "head-node"
... }
ray_logs_spec {
}
}
}
}
persistent_resource_id: "ray-cluster-2025-02-28-00-13-06-test-ray-data"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/580378083368/locations/us-central1'), ('x-goog-api-client', '.../1.82.0+vertex_ray+top_google_constructor_method+google.cloud.aiplatform.vertex_ray.cluster_init.create_ray_cluster')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:247: in __call__
response, ignored_call = self._with_call(request,
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:290: in _with_call
return call.result(), call
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:343: in result
raise self
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:274: in continuation
response, call = self._thunk(new_method).with_call(
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:957: in with_call
return _end_unary_response_blocking(state, call, True, None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state =
call =
with_call = True, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.FAILED_PRECONDITION
E details = "You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another."
E debug_error_string = "UNKNOWN:Error received from peer ipv4:142.250.107.95:443 {created_time:"2025-02-28T00:13:07.430087541+00:00", grpc_status:9, grpc_message:"You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another."}"
E >
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:849: _InactiveRpcError
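Note: this FAILED_PRECONDITION is a regional quota on PersistentResources rather than a cluster fault, and the cluster stranded by the 2.9 failure above is a plausible occupant of that quota. A minimal cleanup sketch, assuming the v1 PersistentResourceServiceClient; the region matches the error, while the endpoint format and helper name are assumptions:

```python
from google.cloud import aiplatform_v1


def delete_stale_ray_clusters(project: str, location: str = "us-central1"):
    # Free regional PersistentResource quota by deleting leftover
    # clusters before re-running the test.
    client = aiplatform_v1.PersistentResourceServiceClient(
        client_options={"api_endpoint": f"{location}-aiplatform.googleapis.com"}
    )
    parent = f"projects/{project}/locations/{location}"
    for resource in client.list_persistent_resources(parent=parent):
        # Delete returns a long-running operation; block until done.
        client.delete_persistent_resource(name=resource.name).result()
```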
The above exception was the direct cause of the following exception:
head_node_type = Resources(machine_type='n1-standard-16', node_count=1, accelerator_type=None, accelerator_count=0, boot_disk_type='pd-ssd', boot_disk_size_gb=100, custom_image=None, autoscaling_spec=None)
python_version = '3.10', ray_version = '2.33', network = None
service_account = None
cluster_name = 'ray-cluster-2025-02-28-00-13-06-test-ray-data'
worker_node_types = [Resources(machine_type='n1-standard-16', node_count=1, accelerator_type=None, accelerator_count=0, boot_disk_type='pd...e=None, accelerator_count=0, boot_disk_type='pd-ssd', boot_disk_size_gb=100, custom_image=None, autoscaling_spec=None)]
custom_images = None, enable_metrics_collection = True, enable_logging = True
psc_interface_config = None, reserved_ip_ranges = None, nfs_mounts = None
labels = None
def create_ray_cluster(
head_node_type: Optional[resources.Resources] = resources.Resources(),
python_version: Optional[str] = "3.10",
ray_version: Optional[str] = "2.33",
network: Optional[str] = None,
service_account: Optional[str] = None,
cluster_name: Optional[str] = None,
worker_node_types: Optional[List[resources.Resources]] = [resources.Resources()],
custom_images: Optional[resources.NodeImages] = None,
enable_metrics_collection: Optional[bool] = True,
enable_logging: Optional[bool] = True,
psc_interface_config: Optional[resources.PscIConfig] = None,
reserved_ip_ranges: Optional[List[str]] = None,
nfs_mounts: Optional[List[resources.NfsMount]] = None,
labels: Optional[Dict[str, str]] = None,
) -> str:
"""Create a ray cluster on the Vertex AI.
Sample usage:
from vertex_ray import Resources
head_node_type = Resources(
machine_type="n1-standard-8",
node_count=1,
accelerator_type="NVIDIA_TESLA_K80",
accelerator_count=1,
custom_image="us-docker.pkg.dev/my-project/ray-cpu-image.2.9:latest", # Optional
)
worker_node_types = [Resources(
machine_type="n1-standard-8",
node_count=2,
accelerator_type="NVIDIA_TESLA_K80",
accelerator_count=1,
custom_image="us-docker.pkg.dev/my-project/ray-gpu-image.2.9:latest", # Optional
)]
cluster_resource_name = vertex_ray.create_ray_cluster(
head_node_type=head_node_type,
network="projects/my-project-number/global/networks/my-vpc-name", # Optional
service_account="my-service-account@my-project-number.iam.gserviceaccount.com", # Optional
cluster_name="my-cluster-name", # Optional
worker_node_types=worker_node_types,
ray_version="2.9",
)
After a Ray cluster is set up, you can call
`ray.init(f"vertex_ray://{cluster_resource_name}", runtime_env=...)` to
connect to the cluster without specifying its address. To shut down the
cluster, call `ray.delete_ray_cluster()`.
Note: If the active ray cluster has not finished shutting down, you cannot
create a new ray cluster with the same cluster_name.
Args:
head_node_type: The head node resource. Resources.node_count must be 1.
If not set, the default values of the Resources() class are used.
python_version: Python version for the ray cluster.
ray_version: Ray version for the ray cluster. Default is 2.33.0.
network: Virtual private cloud (VPC) network. For Ray Client, VPC
peering is required to connect to the Ray cluster managed by the
Vertex AI service. For the Ray Job API, a VPC network is not required
because the Ray cluster can be reached through its dashboard
address.
service_account: Service account to be used for running Ray programs on
the cluster.
cluster_name: This value may be up to 63 characters, and valid
characters are `[a-z0-9_-]`. The first character cannot be a number
or hyphen.
worker_node_types: The list of Resources of the worker nodes. The same
Resources object should not appear multiple times in the list.
custom_images: The NodeImages that specifies the head node and worker
node images. All the workers will share the same image. If each Resource
has a specific custom image, use `Resources.custom_image` for
head/worker_node_type(s). Note that configuring `Resources.custom_image`
will override `custom_images` here. Allowlist only.
enable_metrics_collection: Enable Ray metrics collection for visualization.
enable_logging: Enable exporting Ray logs to Cloud Logging.
psc_interface_config: PSC-I config.
reserved_ip_ranges: A list of names for the reserved IP ranges under
the VPC network that can be used for this cluster. If set, we will
deploy the cluster within the provided IP ranges. Otherwise, the
cluster is deployed to any IP ranges under the provided VPC network.
Example: ["vertex-ai-ip-range"].
labels:
The labels with user-defined metadata to organize the Ray cluster.
Label keys and values can be no longer than 64 characters (Unicode
codepoints), can only contain lowercase letters, numeric characters,
underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
Returns:
The cluster_resource_name of the initiated Ray cluster on Vertex.
Raises:
ValueError: If the cluster is not created successfully.
RuntimeError: If the ray_version is 2.4.
"""
if network is None:
logging.info(
"[Ray on Vertex]: No VPC network configured. It is required for client connection."
)
if ray_version == "2.4":
raise RuntimeError(_V2_4_WARNING_MESSAGE)
if ray_version == "2.9.3":
warnings.warn(_V2_9_WARNING_MESSAGE, DeprecationWarning, stacklevel=1)
local_ray_version = _validation_utils.get_local_ray_version()
if ray_version != local_ray_version:
if custom_images is None and head_node_type.custom_image is None:
install_ray_version = "2.33.0"
logging.info(
"[Ray on Vertex]: Local runtime has Ray version %s"
", but the requested cluster runtime has %s. Please "
"ensure that the Ray versions match for client connectivity. You may "
'"pip install --user --force-reinstall ray[default]==%s"'
" and restart runtime before cluster connection."
% (local_ray_version, ray_version, install_ray_version)
)
else:
logging.info(
"[Ray on Vertex]: Local runtime has Ray version %s."
"Please ensure that the Ray versions match for client connectivity."
% local_ray_verion
)
if cluster_name is None:
cluster_name = "ray-cluster-" + utils.timestamped_unique_name()
if head_node_type:
if head_node_type.node_count != 1:
raise ValueError(
"[Ray on Vertex AI]: For head_node_type, "
+ "Resources.node_count must be 1."
)
if head_node_type.autoscaling_spec is not None:
raise ValueError(
"[Ray on Vertex AI]: For head_node_type, "
+ "Resources.autoscaling_spec must be None."
)
if (
head_node_type.accelerator_type is None
and head_node_type.accelerator_count > 0
):
raise ValueError(
"[Ray on Vertex]: accelerator_type must be specified when"
+ " accelerator_count is set to a value other than 0."
)
resource_pool_images = {}
# head node
resource_pool_0 = ResourcePool()
resource_pool_0.id = "head-node"
resource_pool_0.replica_count = head_node_type.node_count
resource_pool_0.machine_spec.machine_type = head_node_type.machine_type
resource_pool_0.machine_spec.accelerator_count = head_node_type.accelerator_count
resource_pool_0.machine_spec.accelerator_type = head_node_type.accelerator_type
resource_pool_0.disk_spec.boot_disk_type = head_node_type.boot_disk_type
resource_pool_0.disk_spec.boot_disk_size_gb = head_node_type.boot_disk_size_gb
enable_cuda = head_node_type.accelerator_count > 0
if head_node_type.custom_image is not None:
image_uri = head_node_type.custom_image
elif custom_images is None:
image_uri = _validation_utils.get_image_uri(
ray_version, python_version, enable_cuda
)
elif custom_images.head is not None and custom_images.worker is not None:
image_uri = custom_images.head
else:
raise ValueError(
"[Ray on Vertex AI]: custom_images.head and custom_images.worker must be specified when custom_images is set."
)
resource_pool_images[resource_pool_0.id] = image_uri
worker_pools = []
i = 0
if worker_node_types:
for worker_node_type in worker_node_types:
if (
worker_node_type.accelerator_type is None
and worker_node_type.accelerator_count > 0
):
raise ValueError(
"[Ray on Vertex]: accelerator_type must be specified when"
+ " accelerator_count is set to a value other than 0."
)
additional_replica_count = resources._check_machine_spec_identical(
head_node_type, worker_node_type
)
if worker_node_type.autoscaling_spec is None:
# Worker and head share the same MachineSpec, merge them into the
# same ResourcePool
resource_pool_0.replica_count = (
resource_pool_0.replica_count + additional_replica_count
)
else:
if additional_replica_count > 0:
# Autoscaling for single ResourcePool (homogeneous cluster).
resource_pool_0.replica_count = None
resource_pool_0.autoscaling_spec.min_replica_count = (
worker_node_type.autoscaling_spec.min_replica_count
)
resource_pool_0.autoscaling_spec.max_replica_count = (
worker_node_type.autoscaling_spec.max_replica_count
)
if additional_replica_count == 0:
resource_pool = ResourcePool()
resource_pool.id = f"worker-pool{i+1}"
if worker_node_type.autoscaling_spec is None:
resource_pool.replica_count = worker_node_type.node_count
else:
# Autoscaling for worker ResourcePool.
resource_pool.autoscaling_spec.min_replica_count = (
worker_node_type.autoscaling_spec.min_replica_count
)
resource_pool.autoscaling_spec.max_replica_count = (
worker_node_type.autoscaling_spec.max_replica_count
)
resource_pool.machine_spec.machine_type = worker_node_type.machine_type
resource_pool.machine_spec.accelerator_count = (
worker_node_type.accelerator_count
)
resource_pool.machine_spec.accelerator_type = (
worker_node_type.accelerator_type
)
resource_pool.disk_spec.boot_disk_type = worker_node_type.boot_disk_type
resource_pool.disk_spec.boot_disk_size_gb = (
worker_node_type.boot_disk_size_gb
)
worker_pools.append(resource_pool)
enable_cuda = worker_node_type.accelerator_count > 0
if worker_node_type.custom_image is not None:
image_uri = worker_node_type.custom_image
elif custom_images is None:
image_uri = _validation_utils.get_image_uri(
ray_version, python_version, enable_cuda
)
else:
image_uri = custom_images.worker
resource_pool_images[resource_pool.id] = image_uri
i += 1
resource_pools = [resource_pool_0] + worker_pools
metrics_collection_disabled = not enable_metrics_collection
ray_metric_spec = RayMetricSpec(disabled=metrics_collection_disabled)
logging_disabled = not enable_logging
ray_logs_spec = RayLogsSpec(disabled=logging_disabled)
ray_spec = RaySpec(
resource_pool_images=resource_pool_images,
ray_metric_spec=ray_metric_spec,
ray_logs_spec=ray_logs_spec,
)
if nfs_mounts:
gapic_nfs_mounts = []
for nfs_mount in nfs_mounts:
gapic_nfs_mounts.append(
NfsMount(
server=nfs_mount.server,
path=nfs_mount.path,
mount_point=nfs_mount.mount_point,
)
)
ray_spec.nfs_mounts = gapic_nfs_mounts
if service_account:
service_account_spec = ServiceAccountSpec(
enable_custom_service_account=True,
service_account=service_account,
)
resource_runtime_spec = ResourceRuntimeSpec(
ray_spec=ray_spec,
service_account_spec=service_account_spec,
)
else:
resource_runtime_spec = ResourceRuntimeSpec(ray_spec=ray_spec)
if psc_interface_config:
gapic_psc_interface_config = PscInterfaceConfig(
network_attachment=psc_interface_config.network_attachment,
)
else:
gapic_psc_interface_config = None
persistent_resource = PersistentResource(
resource_pools=resource_pools,
network=network,
labels=labels,
resource_runtime_spec=resource_runtime_spec,
psc_interface_config=gapic_psc_interface_config,
reserved_ip_ranges=reserved_ip_ranges,
)
location = initializer.global_config.location
project_id = initializer.global_config.project
project_number = resource_manager_utils.get_project_number(project_id)
parent = f"projects/{project_number}/locations/{location}"
request = persistent_resource_service.CreatePersistentResourceRequest(
parent=parent,
persistent_resource=persistent_resource,
persistent_resource_id=cluster_name,
)
client = _gapic_utils.create_persistent_resource_client()
try:
> _ = client.create_persistent_resource(request)
google/cloud/aiplatform/vertex_ray/cluster_init.py:367:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform_v1beta1/services/persistent_resource_service/client.py:1006: in create_persistent_resource
response = rpc(
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131: in __call__
return wrapped_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "projects/580378083368/locations/us-central1"
persistent_resource {
resource_pools {
id: "head-node"
... }
ray_logs_spec {
}
}
}
}
persistent_resource_id: "ray-cluster-2025-02-28-00-13-06-test-ray-data"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/580378083368/locations/us-central1'), ('x-goog-api-client', '.../1.82.0+vertex_ray+top_google_constructor_method+google.cloud.aiplatform.vertex_ray.cluster_init.create_ray_cluster')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.FailedPrecondition: 400 You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another.
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:78: FailedPrecondition
The above exception was the direct cause of the following exception:
self =
cluster_ray_version = '2.33'
@pytest.mark.parametrize("cluster_ray_version", ["2.9", "2.33"])
def test_ray_data(self, cluster_ray_version):
head_node_type = vertex_ray.Resources()
worker_node_types = [
vertex_ray.Resources(),
vertex_ray.Resources(),
vertex_ray.Resources(),
]
assert ray.__version__ == RAY_VERSION
aiplatform.init(project=PROJECT_ID, location="us-central1")
timestamp = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
# Create cluster, get dashboard address
> cluster_resource_name = vertex_ray.create_ray_cluster(
head_node_type=head_node_type,
worker_node_types=worker_node_types,
cluster_name=f"ray-cluster-{timestamp}-test-ray-data",
ray_version=cluster_ray_version,
)
tests/system/vertex_ray/test_ray_data.py:100:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
try:
_ = client.create_persistent_resource(request)
except Exception as e:
> raise ValueError("Failed in cluster creation due to: ", e) from e
E ValueError: ('Failed in cluster creation due to: ', FailedPrecondition('You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another.'))
google/cloud/aiplatform/vertex_ray/cluster_init.py:369: ValueError
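The FailedPrecondition above means the regional quota of PersistentResources is exhausted, and create_ray_cluster surfaces it as a ValueError. A minimal recovery sketch for a test harness follows; it assumes the vertex_ray helpers list_ray_clusters and delete_ray_cluster and a Cluster.cluster_resource_name attribute, none of which appear in this log, so treat those names as assumptions rather than confirmed API.
# Hypothetical cleanup-and-retry wrapper for the quota failure above.
# list_ray_clusters/delete_ray_cluster and cluster_resource_name are assumed
# from the SDK surface; they are not shown in this log.
from google.cloud.aiplatform import vertex_ray
def create_ray_cluster_with_cleanup(**kwargs):
    try:
        return vertex_ray.create_ray_cluster(**kwargs)
    except ValueError as e:
        # create_ray_cluster wraps the gRPC FailedPrecondition in a ValueError.
        if "maximum number of PersistentResources" not in str(e):
            raise
        # Free regional capacity by deleting clusters left over from prior
        # runs, then retry once.
        for cluster in vertex_ray.list_ray_clusters():
            vertex_ray.delete_ray_cluster(cluster.cluster_resource_name)
        return vertex_ray.create_ray_cluster(**kwargs)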
_____________ TestClusterManagement.test_cluster_management[2.33] ______________
[gw0] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
self =
cluster_ray_version = '2.33'
@pytest.mark.parametrize("cluster_ray_version", ["2.9", "2.33"])
def test_cluster_management(self, cluster_ray_version):
assert ray.__version__ == RAY_VERSION
aiplatform.init(project=PROJECT_ID, location="us-central1")
# CPU default cluster
head_node_type = vertex_ray.Resources()
worker_node_types = [vertex_ray.Resources()]
timestamp = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
> cluster_resource_name = vertex_ray.create_ray_cluster(
head_node_type=head_node_type,
worker_node_types=worker_node_types,
cluster_name=f"ray-cluster-{timestamp}-test-cluster-management",
ray_version=cluster_ray_version,
)
tests/system/vertex_ray/test_cluster_management.py:45:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/vertex_ray/cluster_init.py:373: in create_ray_cluster
response = _gapic_utils.get_persistent_resource(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
persistent_resource_name = 'projects/580378083368/locations/us-central1/persistentResources/ray-cluster-2025-02-27-23-45-50-test-cluster-management'
tolerance = 1
def get_persistent_resource(
persistent_resource_name: str, tolerance: Optional[int] = 0
):
"""Get persistent resource.
Args:
persistent_resource_name:
"projects//locations//persistentResources/".
tolerance: Number of attempts to get the persistent resource.
Returns:
aiplatform_v1.PersistentResource if state is RUNNING.
Raises:
ValueError: Invalid cluster resource name.
RuntimeError: Service returns error.
RuntimeError: Cluster resource state is STOPPING.
RuntimeError: Cluster resource state is ERROR.
"""
client = create_persistent_resource_client()
request = GetPersistentResourceRequest(name=persistent_resource_name)
# TODO(b/277117901): Add test cases for polling and error handling
num_attempts = 0
while True:
try:
response = client.get_persistent_resource(request)
except exceptions.NotFound:
response = None
if num_attempts >= tolerance:
raise ValueError(
"[Ray on Vertex AI]: Invalid cluster_resource_name (404 not found)."
)
if response:
if response.error.message:
logging.error("[Ray on Vertex AI]: %s" % response.error.message)
> raise RuntimeError("[Ray on Vertex AI]: Cluster returned an error.")
E RuntimeError: [Ray on Vertex AI]: Cluster returned an error.
google/cloud/aiplatform/vertex_ray/util/_gapic_utils.py:115: RuntimeError
----------------------------- Captured stdout call -----------------------------
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 1; sleeping for 0:02:30 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 2; sleeping for 0:01:54.750000 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 3; sleeping for 0:01:27.783750 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 4; sleeping for 0:01:07.154569 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 5; sleeping for 0:00:51.373245 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 6; sleeping for 0:00:39.300532 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 7; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 8; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 9; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 10; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 11; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 12; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 13; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 14; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 15; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 16; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 17; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 18; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 19; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 20; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 21; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 22; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 23; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 24; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 25; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 26; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 27; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 28; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 29; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 30; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 31; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 32; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 33; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 34; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 35; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 36; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 37; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 38; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 39; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 40; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 41; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 42; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 43; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 44; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 45; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 46; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 47; sleeping for 0:00:30.064907 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 48; sleeping for 0:00:30.064907 seconds
------------------------------ Captured log call -------------------------------
ERROR root:_gapic_utils.py:114 [Ray on Vertex AI]: An internal error occurred on your cluster. Please try recreating one in a few minutes. If you still experience errors, contact Cloud AI Platform.
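The captured stdout above shows the provisioning poll's backoff: the wait starts at 2 minutes 30 seconds, shrinks by a constant factor each attempt, and settles at roughly 30 seconds. A small sketch reproducing that schedule is below; the initial value, the decay factor (~0.765), and the floor are inferred from the logged durations, not taken from the SDK.
# Reproduces the wait schedule seen in the log: 150s, 114.75s, 87.78375s, ...
# down to a steady ~30.06s. All constants are inferred from this log.
import datetime
def provisioning_waits(initial=150.0, decay=0.765, floor=30.064907):
    wait = initial
    attempt = 1
    while True:
        yield attempt, datetime.timedelta(seconds=wait)
        wait = max(wait * decay, floor)
        attempt += 1
# Printing the first few waits matches the log lines above:
# attempt 1 -> 0:02:30, attempt 2 -> 0:01:54.750000, attempt 3 -> 0:01:27.783750, ...
for attempt, wait in provisioning_waits():
    if attempt > 6:
        break
    print(f"Waiting for cluster provisioning; attempt {attempt}; sleeping for {wait} seconds")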
________ TestJobSubmissionDashboard.test_job_submission_dashboard[2.9] _________
[gw0] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
args = (parent: "projects/580378083368/locations/us-central1"
persistent_resource {
resource_pools {
id: "head-node"
...s_spec {
}
}
}
}
persistent_resource_id: "ray-cluster-2025-02-28-00-15-34-test-job-submission-dashboard"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/580378083368/locations/us-central1'), ('x-goog-api-client', '.../1.82.0+vertex_ray+top_google_constructor_method+google.cloud.aiplatform.vertex_ray.cluster_init.create_ray_cluster')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:247: in __call__
response, ignored_call = self._with_call(request,
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:290: in _with_call
return call.result(), call
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:343: in result
raise self
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:274: in continuation
response, call = self._thunk(new_method).with_call(
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:957: in with_call
return _end_unary_response_blocking(state, call, True, None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state =
call =
with_call = True, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.FAILED_PRECONDITION
E details = "You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another."
E debug_error_string = "UNKNOWN:Error received from peer ipv4:142.251.188.95:443 {grpc_message:"You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another.", grpc_status:9, created_time:"2025-02-28T00:15:34.576884676+00:00"}"
E >
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
head_node_type = Resources(machine_type='n1-standard-16', node_count=1, accelerator_type=None, accelerator_count=0, boot_disk_type='pd-ssd', boot_disk_size_gb=100, custom_image=None, autoscaling_spec=None)
python_version = '3.10', ray_version = '2.9', network = None
service_account = None
cluster_name = 'ray-cluster-2025-02-28-00-15-34-test-job-submission-dashboard'
worker_node_types = [Resources(machine_type='n1-standard-16', node_count=1, accelerator_type=None, accelerator_count=0, boot_disk_type='pd-ssd', boot_disk_size_gb=100, custom_image=None, autoscaling_spec=None)]
custom_images = None, enable_metrics_collection = True, enable_logging = True
psc_interface_config = None, reserved_ip_ranges = None, nfs_mounts = None
labels = None
try:
> _ = client.create_persistent_resource(request)
google/cloud/aiplatform/vertex_ray/cluster_init.py:367:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform_v1beta1/services/persistent_resource_service/client.py:1006: in create_persistent_resource
response = rpc(
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131: in __call__
return wrapped_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "projects/580378083368/locations/us-central1"
persistent_resource {
resource_pools {
id: "head-node"
...s_spec {
}
}
}
}
persistent_resource_id: "ray-cluster-2025-02-28-00-15-34-test-job-submission-dashboard"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/580378083368/locations/us-central1'), ('x-goog-api-client', '.../1.82.0+vertex_ray+top_google_constructor_method+google.cloud.aiplatform.vertex_ray.cluster_init.create_ray_cluster')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.FailedPrecondition: 400 You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another.
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:78: FailedPrecondition
The above exception was the direct cause of the following exception:
self =
cluster_ray_version = '2.9'
@pytest.mark.parametrize("cluster_ray_version", ["2.9", "2.33"])
def test_job_submission_dashboard(self, cluster_ray_version):
assert ray.__version__ == RAY_VERSION
aiplatform.init(project=PROJECT_ID, location="us-central1")
head_node_type = vertex_ray.Resources()
worker_node_types = [vertex_ray.Resources()]
timestamp = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
# Create cluster, get dashboard address
> cluster_resource_name = vertex_ray.create_ray_cluster(
head_node_type=head_node_type,
worker_node_types=worker_node_types,
cluster_name=f"ray-cluster-{timestamp}-test-job-submission-dashboard",
ray_version=cluster_ray_version,
)
tests/system/vertex_ray/test_job_submission_dashboard.py:49:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
head_node_type = Resources(machine_type='n1-standard-16', node_count=1, accelerator_type=None, accelerator_count=0, boot_disk_type='pd-ssd', boot_disk_size_gb=100, custom_image=None, autoscaling_spec=None)
python_version = '3.10', ray_version = '2.9', network = None
service_account = None
cluster_name = 'ray-cluster-2025-02-28-00-15-34-test-job-submission-dashboard'
worker_node_types = [Resources(machine_type='n1-standard-16', node_count=1, accelerator_type=None, accelerator_count=0, boot_disk_type='pd-ssd', boot_disk_size_gb=100, custom_image=None, autoscaling_spec=None)]
custom_images = None, enable_metrics_collection = True, enable_logging = True
psc_interface_config = None, reserved_ip_ranges = None, nfs_mounts = None
labels = None
def create_ray_cluster(
head_node_type: Optional[resources.Resources] = resources.Resources(),
python_version: Optional[str] = "3.10",
ray_version: Optional[str] = "2.33",
network: Optional[str] = None,
service_account: Optional[str] = None,
cluster_name: Optional[str] = None,
worker_node_types: Optional[List[resources.Resources]] = [resources.Resources()],
custom_images: Optional[resources.NodeImages] = None,
enable_metrics_collection: Optional[bool] = True,
enable_logging: Optional[bool] = True,
psc_interface_config: Optional[resources.PscIConfig] = None,
reserved_ip_ranges: Optional[List[str]] = None,
nfs_mounts: Optional[List[resources.NfsMount]] = None,
labels: Optional[Dict[str, str]] = None,
) -> str:
"""Create a ray cluster on the Vertex AI.
Sample usage:
from vertex_ray import Resources
head_node_type = Resources(
machine_type="n1-standard-8",
node_count=1,
accelerator_type="NVIDIA_TESLA_K80",
accelerator_count=1,
custom_image="us-docker.pkg.dev/my-project/ray-cpu-image.2.9:latest", # Optional
)
worker_node_types = [Resources(
machine_type="n1-standard-8",
node_count=2,
accelerator_type="NVIDIA_TESLA_K80",
accelerator_count=1,
custom_image="us-docker.pkg.dev/my-project/ray-gpu-image.2.9:latest", # Optional
)]
cluster_resource_name = vertex_ray.create_ray_cluster(
head_node_type=head_node_type,
network="projects/my-project-number/global/networks/my-vpc-name", # Optional
service_account="my-service-account@my-project-number.iam.gserviceaccount.com", # Optional
cluster_name="my-cluster-name", # Optional
worker_node_types=worker_node_types,
ray_version="2.9",
)
After a ray cluster is set up, you can call
`ray.init(f"vertex_ray://{cluster_resource_name}", runtime_env=...)` without
specifying ray cluster address to connect to the cluster. To shut down the
cluster you can call `ray.delete_ray_cluster()`.
Note: If the active ray cluster has not finished shutting down, you cannot
create a new ray cluster with the same cluster_name.
Args:
head_node_type: The head node resource. Resources.node_count must be 1.
If not set, default value of Resources() class will be used.
python_version: Python version for the ray cluster.
ray_version: Ray version for the ray cluster. Default is 2.33.0.
network: Virtual private cloud (VPC) network. For Ray Client, VPC
peering is required to connect to the Ray Cluster managed in the
Vertex API service. For Ray Job API, VPC network is not required
because Ray Cluster connection can be accessed through dashboard
address.
service_account: Service account to be used for running Ray programs on
the cluster.
cluster_name: This value may be up to 63 characters, and valid
characters are `[a-z0-9_-]`. The first character cannot be a number
or hyphen.
worker_node_types: The list of Resources of the worker nodes. The same
Resources object should not appear multiple times in the list.
custom_images: The NodeImages which specifies head node and worker nodes
images. All the workers will share the same image. If each Resource
has a specific custom image, use `Resources.custom_image` for
head/worker_node_type(s). Note that configuring `Resources.custom_image`
will override `custom_images` here. Allowlist only.
enable_metrics_collection: Enable Ray metrics collection for visualization.
enable_logging: Enable exporting Ray logs to Cloud Logging.
psc_interface_config: PSC-I config.
reserved_ip_ranges: A list of names for the reserved IP ranges under
the VPC network that can be used for this cluster. If set, we will
deploy the cluster within the provided IP ranges. Otherwise, the
cluster is deployed to any IP ranges under the provided VPC network.
Example: ["vertex-ai-ip-range"].
labels:
The labels with user-defined metadata to organize Ray cluster.
Label keys and values can be no longer than 64 characters (Unicode
codepoints), can only contain lowercase letters, numeric characters,
underscores and dashes. International characters are allowed.
See https://goo.gl/xmQnxf for more information and examples of labels.
Returns:
The cluster_resource_name of the initiated Ray cluster on Vertex.
Raise:
ValueError: If the cluster is not created successfully.
RuntimeError: If the ray_version is 2.4.
"""
if network is None:
logging.info(
"[Ray on Vertex]: No VPC network configured. It is required for client connection."
)
if ray_version == "2.4":
raise RuntimeError(_V2_4_WARNING_MESSAGE)
if ray_version == "2.9.3":
warnings.warn(_V2_9_WARNING_MESSAGE, DeprecationWarning, stacklevel=1)
local_ray_verion = _validation_utils.get_local_ray_version()
if ray_version != local_ray_verion:
if custom_images is None and head_node_type.custom_image is None:
install_ray_version = "2.33.0"
logging.info(
"[Ray on Vertex]: Local runtime has Ray version %s"
", but the requested cluster runtime has %s. Please "
"ensure that the Ray versions match for client connectivity. You may "
'"pip install --user --force-reinstall ray[default]==%s"'
" and restart runtime before cluster connection."
% (local_ray_verion, ray_version, install_ray_version)
)
else:
logging.info(
"[Ray on Vertex]: Local runtime has Ray version %s."
"Please ensure that the Ray versions match for client connectivity."
% local_ray_verion
)
if cluster_name is None:
cluster_name = "ray-cluster-" + utils.timestamped_unique_name()
if head_node_type:
if head_node_type.node_count != 1:
raise ValueError(
"[Ray on Vertex AI]: For head_node_type, "
+ "Resources.node_count must be 1."
)
if head_node_type.autoscaling_spec is not None:
raise ValueError(
"[Ray on Vertex AI]: For head_node_type, "
+ "Resources.autoscaling_spec must be None."
)
if (
head_node_type.accelerator_type is None
and head_node_type.accelerator_count > 0
):
raise ValueError(
"[Ray on Vertex]: accelerator_type must be specified when"
+ " accelerator_count is set to a value other than 0."
)
resource_pool_images = {}
# head node
resource_pool_0 = ResourcePool()
resource_pool_0.id = "head-node"
resource_pool_0.replica_count = head_node_type.node_count
resource_pool_0.machine_spec.machine_type = head_node_type.machine_type
resource_pool_0.machine_spec.accelerator_count = head_node_type.accelerator_count
resource_pool_0.machine_spec.accelerator_type = head_node_type.accelerator_type
resource_pool_0.disk_spec.boot_disk_type = head_node_type.boot_disk_type
resource_pool_0.disk_spec.boot_disk_size_gb = head_node_type.boot_disk_size_gb
enable_cuda = head_node_type.accelerator_count > 0
if head_node_type.custom_image is not None:
image_uri = head_node_type.custom_image
elif custom_images is None:
image_uri = _validation_utils.get_image_uri(
ray_version, python_version, enable_cuda
)
elif custom_images.head is not None and custom_images.worker is not None:
image_uri = custom_images.head
else:
raise ValueError(
"[Ray on Vertex AI]: custom_images.head and custom_images.worker must be specified when custom_images is set."
)
resource_pool_images[resource_pool_0.id] = image_uri
worker_pools = []
i = 0
if worker_node_types:
for worker_node_type in worker_node_types:
if (
worker_node_type.accelerator_type is None
and worker_node_type.accelerator_count > 0
):
raise ValueError(
"[Ray on Vertex]: accelerator_type must be specified when"
+ " accelerator_count is set to a value other than 0."
)
additional_replica_count = resources._check_machine_spec_identical(
head_node_type, worker_node_type
)
if worker_node_type.autoscaling_spec is None:
# Worker and head share the same MachineSpec, merge them into the
# same ResourcePool
resource_pool_0.replica_count = (
resource_pool_0.replica_count + additional_replica_count
)
else:
if additional_replica_count > 0:
# Autoscaling for single ResourcePool (homogeneous cluster).
resource_pool_0.replica_count = None
resource_pool_0.autoscaling_spec.min_replica_count = (
worker_node_type.autoscaling_spec.min_replica_count
)
resource_pool_0.autoscaling_spec.max_replica_count = (
worker_node_type.autoscaling_spec.max_replica_count
)
if additional_replica_count == 0:
resource_pool = ResourcePool()
resource_pool.id = f"worker-pool{i+1}"
if worker_node_type.autoscaling_spec is None:
resource_pool.replica_count = worker_node_type.node_count
else:
# Autoscaling for worker ResourcePool.
resource_pool.autoscaling_spec.min_replica_count = (
worker_node_type.autoscaling_spec.min_replica_count
)
resource_pool.autoscaling_spec.max_replica_count = (
worker_node_type.autoscaling_spec.max_replica_count
)
resource_pool.machine_spec.machine_type = worker_node_type.machine_type
resource_pool.machine_spec.accelerator_count = (
worker_node_type.accelerator_count
)
resource_pool.machine_spec.accelerator_type = (
worker_node_type.accelerator_type
)
resource_pool.disk_spec.boot_disk_type = worker_node_type.boot_disk_type
resource_pool.disk_spec.boot_disk_size_gb = (
worker_node_type.boot_disk_size_gb
)
worker_pools.append(resource_pool)
enable_cuda = worker_node_type.accelerator_count > 0
if worker_node_type.custom_image is not None:
image_uri = worker_node_type.custom_image
elif custom_images is None:
image_uri = _validation_utils.get_image_uri(
ray_version, python_version, enable_cuda
)
else:
image_uri = custom_images.worker
resource_pool_images[resource_pool.id] = image_uri
i += 1
resource_pools = [resource_pool_0] + worker_pools
metrics_collection_disabled = not enable_metrics_collection
ray_metric_spec = RayMetricSpec(disabled=metrics_collection_disabled)
logging_disabled = not enable_logging
ray_logs_spec = RayLogsSpec(disabled=logging_disabled)
ray_spec = RaySpec(
resource_pool_images=resource_pool_images,
ray_metric_spec=ray_metric_spec,
ray_logs_spec=ray_logs_spec,
)
if nfs_mounts:
gapic_nfs_mounts = []
for nfs_mount in nfs_mounts:
gapic_nfs_mounts.append(
NfsMount(
server=nfs_mount.server,
path=nfs_mount.path,
mount_point=nfs_mount.mount_point,
)
)
ray_spec.nfs_mounts = gapic_nfs_mounts
if service_account:
service_account_spec = ServiceAccountSpec(
enable_custom_service_account=True,
service_account=service_account,
)
resource_runtime_spec = ResourceRuntimeSpec(
ray_spec=ray_spec,
service_account_spec=service_account_spec,
)
else:
resource_runtime_spec = ResourceRuntimeSpec(ray_spec=ray_spec)
if psc_interface_config:
gapic_psc_interface_config = PscInterfaceConfig(
network_attachment=psc_interface_config.network_attachment,
)
else:
gapic_psc_interface_config = None
persistent_resource = PersistentResource(
resource_pools=resource_pools,
network=network,
labels=labels,
resource_runtime_spec=resource_runtime_spec,
psc_interface_config=gapic_psc_interface_config,
reserved_ip_ranges=reserved_ip_ranges,
)
location = initializer.global_config.location
project_id = initializer.global_config.project
project_number = resource_manager_utils.get_project_number(project_id)
parent = f"projects/{project_number}/locations/{location}"
request = persistent_resource_service.CreatePersistentResourceRequest(
parent=parent,
persistent_resource=persistent_resource,
persistent_resource_id=cluster_name,
)
client = _gapic_utils.create_persistent_resource_client()
try:
_ = client.create_persistent_resource(request)
except Exception as e:
> raise ValueError("Failed in cluster creation due to: ", e) from e
E ValueError: ('Failed in cluster creation due to: ', FailedPrecondition('You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another.'))
google/cloud/aiplatform/vertex_ray/cluster_init.py:369: ValueError
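Note: the FAILED_PRECONDITION above indicates the regional PersistentResource quota is exhausted; cleanup, not a code change, unblocks the test. A minimal sketch, assuming the vertex_ray surface shown in this log (the project, location, and name filter below are illustrative):

from google.cloud import aiplatform
from google.cloud.aiplatform import vertex_ray

# Illustrative project/location; use the values the failing tests use.
aiplatform.init(project="my-project", location="us-central1")

# Delete leftover Ray clusters from earlier runs to free regional quota.
for cluster in vertex_ray.list_ray_clusters():
    # Illustrative filter: only touch clusters created by these tests.
    if "test-" in cluster.cluster_resource_name:
        vertex_ray.delete_ray_cluster(cluster.cluster_resource_name)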
________ TestJobSubmissionDashboard.test_job_submission_dashboard[2.33] ________
[gw0] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
args = (parent: "projects/580378083368/locations/us-central1"
persistent_resource {
resource_pools {
id: "head-node"
...s_spec {
}
}
}
}
persistent_resource_id: "ray-cluster-2025-02-28-00-15-34-test-job-submission-dashboard"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/580378083368/locations/us-central1'), ('x-goog-api-client', '.../1.82.0+vertex_ray+top_google_constructor_method+google.cloud.aiplatform.vertex_ray.cluster_init.create_ray_cluster')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:247: in __call__
response, ignored_call = self._with_call(request,
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:290: in _with_call
return call.result(), call
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:343: in result
raise self
.nox/system-3-10/lib/python3.10/site-packages/grpc/_interceptor.py:274: in continuation
response, call = self._thunk(new_method).with_call(
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:957: in with_call
return _end_unary_response_blocking(state, call, True, None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state =
call =
with_call = True, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.FAILED_PRECONDITION
E details = "You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another."
E debug_error_string = "UNKNOWN:Error received from peer ipv4:142.251.188.95:443 {created_time:"2025-02-28T00:15:35.033907565+00:00", grpc_status:9, grpc_message:"You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another."}"
E >
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
head_node_type = Resources(machine_type='n1-standard-16', node_count=1, accelerator_type=None, accelerator_count=0, boot_disk_type='pd-ssd', boot_disk_size_gb=100, custom_image=None, autoscaling_spec=None)
python_version = '3.10', ray_version = '2.33', network = None
service_account = None
cluster_name = 'ray-cluster-2025-02-28-00-15-34-test-job-submission-dashboard'
worker_node_types = [Resources(machine_type='n1-standard-16', node_count=1, accelerator_type=None, accelerator_count=0, boot_disk_type='pd-ssd', boot_disk_size_gb=100, custom_image=None, autoscaling_spec=None)]
custom_images = None, enable_metrics_collection = True, enable_logging = True
psc_interface_config = None, reserved_ip_ranges = None, nfs_mounts = None
labels = None
[... create_ray_cluster source elided; duplicate of the listing shown earlier in this log ...]
> _ = client.create_persistent_resource(request)
google/cloud/aiplatform/vertex_ray/cluster_init.py:367:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform_v1beta1/services/persistent_resource_service/client.py:1006: in create_persistent_resource
response = rpc(
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131: in __call__
return wrapped_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "projects/580378083368/locations/us-central1"
persistent_resource {
resource_pools {
id: "head-node"
...s_spec {
}
}
}
}
persistent_resource_id: "ray-cluster-2025-02-28-00-15-34-test-job-submission-dashboard"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/580378083368/locations/us-central1'), ('x-goog-api-client', '.../1.82.0+vertex_ray+top_google_constructor_method+google.cloud.aiplatform.vertex_ray.cluster_init.create_ray_cluster')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.FailedPrecondition: 400 You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another.
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:78: FailedPrecondition
The above exception was the direct cause of the following exception:
self =
cluster_ray_version = '2.33'
@pytest.mark.parametrize("cluster_ray_version", ["2.9", "2.33"])
def test_job_submission_dashboard(self, cluster_ray_version):
assert ray.__version__ == RAY_VERSION
aiplatform.init(project=PROJECT_ID, location="us-central1")
head_node_type = vertex_ray.Resources()
worker_node_types = [vertex_ray.Resources()]
timestamp = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
# Create cluster, get dashboard address
> cluster_resource_name = vertex_ray.create_ray_cluster(
head_node_type=head_node_type,
worker_node_types=worker_node_types,
cluster_name=f"ray-cluster-{timestamp}-test-job-submission-dashboard",
ray_version=cluster_ray_version,
)
tests/system/vertex_ray/test_job_submission_dashboard.py:49:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[... duplicate frame locals and create_ray_cluster source elided; identical to the listing shown earlier in this log ...]
> raise ValueError("Failed in cluster creation due to: ", e) from e
E ValueError: ('Failed in cluster creation due to: ', FailedPrecondition('You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another.'))
google/cloud/aiplatform/vertex_ray/cluster_init.py:369: ValueError
____________ TestPersistentResource.test_create_persistent_resource ____________
[gw9] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
args = (parent: "projects/ucaip-sample-tests/locations/us-central1"
persistent_resource {
name: "test-pr-e2e--5ae0e4b4-1358...dard-4"
}
replica_count: 2
}
}
persistent_resource_id: "test-pr-e2e--5ae0e4b4-1358-4fd4-b2ea-2faab2c677b3"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1'), ('x-goog-api-clie...thon/3.10.15 grpc/1.51.3 gax/2.21.0 gapic/1.82.0+top_google_constructor_method+google.cloud.aiplatform.base.wrapper')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:76:
[... grpc interceptor and _end_unary_response_blocking frames elided; identical to the first traceback in this log ...]
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.FAILED_PRECONDITION
E details = "You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another."
E debug_error_string = "UNKNOWN:Error received from peer ipv4:142.251.188.95:443 {created_time:"2025-02-28T01:53:10.86930611+00:00", grpc_status:9, grpc_message:"You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another."}"
E >
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
self =
shared_state = {}
def test_create_persistent_resource(self, shared_state):
# PersistentResource ID must be shorter than 64 characters.
# IE: "test-pr-e2e-ea3ae19d-3d94-4818-8ecd-1a7a63d7418c"
resource_id = self._make_display_name("")
resource_pools = [
gca_persistent_resource.ResourcePool(
machine_spec=gca_machine_resources.MachineSpec(
machine_type=_TEST_MACHINE_TYPE,
),
replica_count=_TEST_INITIAL_REPLICA_COUNT,
)
]
> test_resource = persistent_resource.PersistentResource.create(
persistent_resource_id=resource_id, resource_pools=resource_pools
)
tests/system/aiplatform/test_persistent_resource.py:61:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/base.py:863: in wrapper
return method(*args, **kwargs)
google/cloud/aiplatform/persistent_resource.py:309: in create
create_lro = cls._create(
google/cloud/aiplatform/persistent_resource.py:376: in _create
return api_client.create_persistent_resource(
google/cloud/aiplatform_v1/services/persistent_resource_service/client.py:961: in create_persistent_resource
response = rpc(
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131: in __call__
return wrapped_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "projects/ucaip-sample-tests/locations/us-central1"
persistent_resource {
name: "test-pr-e2e--5ae0e4b4-1358...dard-4"
}
replica_count: 2
}
}
persistent_resource_id: "test-pr-e2e--5ae0e4b4-1358-4fd4-b2ea-2faab2c677b3"
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1'), ('x-goog-api-clie...thon/3.10.15 grpc/1.51.3 gax/2.21.0 gapic/1.82.0+top_google_constructor_method+google.cloud.aiplatform.base.wrapper')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.FailedPrecondition: 400 You have already provisioned the maximum number of PersistentResources in this region. Please switch to a different region or delete one or more PersistentResources in this region before creating another.
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:78: FailedPrecondition
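Note: this is the same regional PersistentResource quota exhaustion, reached through the persistent_resource surface instead of vertex_ray. A hedged cleanup sketch, assuming the PersistentResource class from this traceback exposes the SDK's usual list()/delete() helpers (the name filter is illustrative):

from google.cloud import aiplatform
from google.cloud.aiplatform import persistent_resource

aiplatform.init(project="ucaip-sample-tests", location="us-central1")

# Delete persistent resources left behind by earlier e2e runs.
for pr in persistent_resource.PersistentResource.list():
    # Illustrative filter matching the test's resource-id prefix.
    if pr.name.startswith("test-pr-e2e-"):
        pr.delete()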
______ TestModelDeploymentMonitoring.test_mdm_two_models_one_valid_config ______
[gw5] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
args = (parent: "projects/ucaip-sample-tests/locations/us-central1"
model_deployment_monitoring_job {
display_name: "temp_e...ocacyorg.joonix.net"
}
enable_logging: true
}
sample_predict_instance {
null_value: NULL_VALUE
}
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1'), ('x-goog-api-clie...0+top_google_constructor_method+google.cloud.aiplatform.jobs.ModelDeploymentMonitoringJob.create')], 'timeout': 3600.0}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:76:
[... grpc interceptor and _end_unary_response_blocking frames elided; identical to the first traceback in this log ...]
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.PERMISSION_DENIED
E details = "Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again."
E debug_error_string = "UNKNOWN:Error received from peer ipv4:74.125.197.95:443 {created_time:"2025-02-28T01:54:16.985923361+00:00", grpc_status:7, grpc_message:"Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again."}"
E >
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
self =
shared_state = {'resources': [
resource name: projects/580378083368/locations/us-central1/endpoints/8528978766867726336]}
def test_mdm_two_models_one_valid_config(self, shared_state):
"""
Enable model monitoring on two existing models deployed to the same endpoint.
"""
assert len(shared_state["resources"]) == 1
self.endpoint = shared_state["resources"][0]
aiplatform.init(project=e2e_base._PROJECT, location=e2e_base._LOCATION)
# test model monitoring configurations
> job = aiplatform.ModelDeploymentMonitoringJob.create(
display_name=self._make_display_name(key=JOB_NAME),
logging_sampling_strategy=sampling_strategy,
schedule_config=schedule_config,
alert_config=email_alert_config,
objective_configs=objective_config,
create_request_timeout=3600,
project=e2e_base._PROJECT,
location=e2e_base._LOCATION,
endpoint=self.endpoint,
)
tests/system/aiplatform/test_model_monitoring.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/jobs.py:3479: in create
self._gca_resource = self.api_client.create_model_deployment_monitoring_job(
google/cloud/aiplatform_v1/services/job_service/client.py:4469: in create_model_deployment_monitoring_job
response = rpc(
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131: in __call__
return wrapped_func(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/timeout.py:120: in func_with_timeout
return func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "projects/ucaip-sample-tests/locations/us-central1"
model_deployment_monitoring_job {
display_name: "temp_e...ocacyorg.joonix.net"
}
enable_logging: true
}
sample_predict_instance {
null_value: NULL_VALUE
}
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1'), ('x-goog-api-clie...0+top_google_constructor_method+google.cloud.aiplatform.jobs.ModelDeploymentMonitoringJob.create')], 'timeout': 3600.0}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.PermissionDenied: 403 Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again.
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:78: PermissionDenied
------------------------------ Captured log call -------------------------------
INFO google.cloud.aiplatform.jobs:base.py:85 Creating ModelDeploymentMonitoringJob
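Note: all three model-monitoring failures in this run trace back to the Vertex AI Service Agent missing bigquery.tables.get on the training table named in the error. A hedged repair sketch using table-level IAM in google-cloud-bigquery; the role is an assumption (any role that carries bigquery.tables.get, such as roles/bigquery.dataViewer, satisfies the error message):

from google.cloud import bigquery

client = bigquery.Client(project="mco-mm")
# Table taken from the error message (bq://mco-mm.bqmlga4.train).
table = client.get_table("mco-mm.bqmlga4.train")

policy = client.get_iam_policy(table)
policy.bindings.append(
    {
        # Assumption: dataViewer is the smallest predefined role with tables.get.
        "role": "roles/bigquery.dataViewer",
        "members": {
            "serviceAccount:service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com"
        },
    }
)
client.set_iam_policy(table, policy)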
_____ TestModelDeploymentMonitoring.test_mdm_two_models_two_valid_configs ______
[gw5] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
args = (parent: "projects/ucaip-sample-tests/locations/us-central1"
model_deployment_monitoring_job {
display_name: "temp_e...ocacyorg.joonix.net"
}
enable_logging: true
}
sample_predict_instance {
null_value: NULL_VALUE
}
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1'), ('x-goog-api-clie...0+top_google_constructor_method+google.cloud.aiplatform.jobs.ModelDeploymentMonitoringJob.create')], 'timeout': 3600.0}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:76:
[... grpc interceptor and _end_unary_response_blocking frames elided; identical to the first traceback in this log ...]
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.PERMISSION_DENIED
E details = "Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again."
E debug_error_string = "UNKNOWN:Error received from peer ipv4:74.125.197.95:443 {created_time:"2025-02-28T01:54:19.205617897+00:00", grpc_status:7, grpc_message:"Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again."}"
E >
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
self =
shared_state = {'resources': [
resource name: projects/580378083368/locations/us-central1/endpoints/8528978766867726336]}
def test_mdm_two_models_two_valid_configs(self, shared_state):
assert len(shared_state["resources"]) == 1
self.endpoint = shared_state["resources"][0]
aiplatform.init(project=e2e_base._PROJECT, location=e2e_base._LOCATION)
[deployed_model1, deployed_model2] = list(
map(lambda x: x.id, self.endpoint.list_models())
)
all_configs = {
deployed_model1: objective_config,
deployed_model2: objective_config2,
}
> job = aiplatform.ModelDeploymentMonitoringJob.create(
display_name=self._make_display_name(key=JOB_NAME),
logging_sampling_strategy=sampling_strategy,
schedule_config=schedule_config,
alert_config=email_alert_config,
objective_configs=all_configs,
create_request_timeout=3600,
project=e2e_base._PROJECT,
location=e2e_base._LOCATION,
endpoint=self.endpoint,
)
tests/system/aiplatform/test_model_monitoring.py:292:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/jobs.py:3479: in create
self._gca_resource = self.api_client.create_model_deployment_monitoring_job(
google/cloud/aiplatform_v1/services/job_service/client.py:4469: in create_model_deployment_monitoring_job
response = rpc(
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131: in __call__
return wrapped_func(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/timeout.py:120: in func_with_timeout
return func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "projects/ucaip-sample-tests/locations/us-central1"
model_deployment_monitoring_job {
display_name: "temp_e...ocacyorg.joonix.net"
}
enable_logging: true
}
sample_predict_instance {
null_value: NULL_VALUE
}
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1'), ('x-goog-api-clie...0+top_google_constructor_method+google.cloud.aiplatform.jobs.ModelDeploymentMonitoringJob.create')], 'timeout': 3600.0}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.PermissionDenied: 403 Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again.
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:78: PermissionDenied
------------------------------ Captured log call -------------------------------
INFO google.cloud.aiplatform.jobs:base.py:85 Creating ModelDeploymentMonitoringJob
___ TestModelDeploymentMonitoring.test_mdm_notification_channel_alert_config ___
[gw5] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
args = (parent: "projects/ucaip-sample-tests/locations/us-central1"
model_deployment_monitoring_job {
display_name: "temp_e...le-tests/notificationChannels/11578134490450491958"
}
sample_predict_instance {
null_value: NULL_VALUE
}
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1'), ('x-goog-api-clie...0+top_google_constructor_method+google.cloud.aiplatform.jobs.ModelDeploymentMonitoringJob.create')], 'timeout': 3600.0}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:76:
[... grpc interceptor and _end_unary_response_blocking frames elided; identical to the first traceback in this log ...]
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.PERMISSION_DENIED
E details = "Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again."
E debug_error_string = "UNKNOWN:Error received from peer ipv4:74.125.197.95:443 {grpc_message:"Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again.", grpc_status:7, created_time:"2025-02-28T01:54:21.798379881+00:00"}"
E >
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
self =
shared_state = {'resources': [
resource name: projects/580378083368/locations/us-central1/endpoints/8528978766867726336]}
def test_mdm_notification_channel_alert_config(self, shared_state):
self.endpoint = shared_state["resources"][0]
aiplatform.init(project=e2e_base._PROJECT, location=e2e_base._LOCATION)
# Reset objective_config.explanation_config
objective_config.explanation_config = None
# test model monitoring configurations
> job = aiplatform.ModelDeploymentMonitoringJob.create(
display_name=self._make_display_name(key=JOB_NAME),
logging_sampling_strategy=sampling_strategy,
schedule_config=schedule_config,
alert_config=alert_config,
objective_configs=objective_config,
create_request_timeout=3600,
project=e2e_base._PROJECT,
location=e2e_base._LOCATION,
endpoint=self.endpoint,
)
tests/system/aiplatform/test_model_monitoring.py:418:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/jobs.py:3479: in create
self._gca_resource = self.api_client.create_model_deployment_monitoring_job(
google/cloud/aiplatform_v1/services/job_service/client.py:4469: in create_model_deployment_monitoring_job
response = rpc(
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131: in __call__
return wrapped_func(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/timeout.py:120: in func_with_timeout
return func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "projects/ucaip-sample-tests/locations/us-central1"
model_deployment_monitoring_job {
display_name: "temp_e...le-tests/notificationChannels/11578134490450491958"
}
sample_predict_instance {
null_value: NULL_VALUE
}
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1'), ('x-goog-api-clie...0+top_google_constructor_method+google.cloud.aiplatform.jobs.ModelDeploymentMonitoringJob.create')], 'timeout': 3600.0}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.PermissionDenied: 403 Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again.
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:78: PermissionDenied
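The PERMISSION_DENIED above is an IAM gap rather than a client bug: the Vertex AI Service Agent needs bigquery.tables.get on the training table named in the error. A minimal remediation sketch, assuming google-cloud-bigquery is installed and the caller is allowed to set the table's IAM policy (the table and service-account names below are taken verbatim from the error message):

from google.cloud import bigquery

client = bigquery.Client(project="mco-mm")
table_ref = "mco-mm.bqmlga4.train"  # from bq://mco-mm.bqmlga4.train

# Grant a read role that includes bigquery.tables.get to the service agent.
policy = client.get_iam_policy(table_ref)
policy.bindings.append({
    "role": "roles/bigquery.dataViewer",  # includes bigquery.tables.get
    "members": {
        "serviceAccount:service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com"
    },
})
client.set_iam_policy(table_ref, policy)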
------------------------------ Captured log call -------------------------------
INFO google.cloud.aiplatform.jobs:base.py:85 Creating ModelDeploymentMonitoringJob
---------------------------- Captured log teardown -----------------------------
INFO google.cloud.aiplatform.models:base.py:189 Undeploying Endpoint model: projects/580378083368/locations/us-central1/endpoints/8528978766867726336
INFO google.cloud.aiplatform.models:base.py:209 Undeploy Endpoint model backing LRO: projects/580378083368/locations/us-central1/endpoints/8528978766867726336/operations/7844716500098220032
INFO google.cloud.aiplatform.models:base.py:222 Endpoint model undeployed. Resource name: projects/580378083368/locations/us-central1/endpoints/8528978766867726336
INFO google.cloud.aiplatform.models:base.py:189 Undeploying Endpoint model: projects/580378083368/locations/us-central1/endpoints/8528978766867726336
INFO google.cloud.aiplatform.models:base.py:209 Undeploy Endpoint model backing LRO: projects/580378083368/locations/us-central1/endpoints/8528978766867726336/operations/6921478576487268352
INFO google.cloud.aiplatform.models:base.py:222 Endpoint model undeployed. Resource name: projects/580378083368/locations/us-central1/endpoints/8528978766867726336
INFO google.cloud.aiplatform.base:base.py:189 Deleting Endpoint: projects/580378083368/locations/us-central1/endpoints/8528978766867726336
INFO google.cloud.aiplatform.base:base.py:222 Endpoint deleted. Resource name: projects/580378083368/locations/us-central1/endpoints/8528978766867726336
INFO google.cloud.aiplatform.base:base.py:156 Deleting Endpoint resource: projects/580378083368/locations/us-central1/endpoints/8528978766867726336
INFO google.cloud.aiplatform.base:base.py:161 Delete Endpoint backing LRO: projects/580378083368/locations/us-central1/operations/1654518812277473280
INFO google.cloud.aiplatform.base:base.py:174 Endpoint resource projects/580378083368/locations/us-central1/endpoints/8528978766867726336 deleted.
=============================== warnings summary ===============================
.nox/system-3-10/lib/python3.10/site-packages/google/cloud/storage/_http.py:19: 16 warnings
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/google/cloud/storage/_http.py:19: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
import pkg_resources
.nox/system-3-10/lib/python3.10/site-packages/pkg_resources/__init__.py:2832: 32 warnings
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pkg_resources/__init__.py:2832: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google')`.
Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
declare_namespace(pkg)
.nox/system-3-10/lib/python3.10/site-packages/pkg_resources/__init__.py:2832: 32 warnings
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pkg_resources/__init__.py:2832: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google.cloud')`.
Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
declare_namespace(pkg)
.nox/system-3-10/lib/python3.10/site-packages/pkg_resources/__init__.py:2317: 16 warnings
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pkg_resources/__init__.py:2317: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google')`.
Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
declare_namespace(parent)
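The declare_namespace deprecations above all point at the same migration: drop the pkg_resources-style namespace __init__.py files and rely on PEP 420 implicit namespace packages. A sketch of the change, assuming a google/__init__.py whose only content is the namespace declaration:

# Before (google/__init__.py) -- triggers the DeprecationWarning:
# __import__("pkg_resources").declare_namespace(__name__)

# After: delete google/__init__.py entirely. With no __init__.py present,
# Python 3.3+ treats "google" as a PEP 420 implicit namespace package and
# submodules keep importing normally:
import google.cloud.storage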
tests/system/aiplatform/test_experiments.py: 38 warnings
tests/system/aiplatform/test_autologging.py: 5 warnings
tests/system/aiplatform/test_custom_job.py: 2 warnings
tests/system/aiplatform/test_model_evaluation.py: 2 warnings
/tmpfs/src/github/python-aiplatform/google/cloud/aiplatform/utils/_ipython_utils.py:149: DeprecationWarning: Importing display from IPython.core.display is deprecated since IPython 7.14, please import from IPython.display
from IPython.core.display import display
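The fix the IPython warning asks for in _ipython_utils.py is a one-line import change:

# Deprecated since IPython 7.14 (emits the warning above):
# from IPython.core.display import display
from IPython.display import display  # preferred import path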
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation_model_predict_async
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_async[grpc-PROD_ENDPOINT]
tests/system/aiplatform/test_model_interactions.py::TestModelInteractions::test_endpoint_predict_async
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pytest_asyncio/plugin.py:867: DeprecationWarning: The event_loop fixture provided by pytest-asyncio has been redefined in
/tmpfs/src/github/python-aiplatform/tests/system/aiplatform/e2e_base.py:212
Replacing the event_loop fixture with a custom implementation is deprecated
and will lead to errors in the future.
If you want to request an asyncio event loop with a scope other than function
scope, use the "loop_scope" argument to the asyncio mark when marking the tests.
If you want to return different types of event loops, use the event_loop_policy
fixture.
warnings.warn(
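Per the message, the redefined event_loop fixture in e2e_base.py:212 should give way to the loop_scope argument on the asyncio marker. A migration sketch, assuming a pytest-asyncio recent enough to support loop_scope:

import pytest

# One event loop shared by every test in the class, replacing a custom
# class-scoped event_loop fixture:
@pytest.mark.asyncio(loop_scope="class")
async def test_endpoint_predict_async():
    ...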
tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_xgboost_booster_with_custom_uri
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/xgboost/core.py:158: UserWarning: [23:35:45] WARNING: /workspace/src/c_api/c_api.cc:1374: Saving model in the UBJSON format as default. You can use file extension: `json`, `ubj` or `deprecated` to choose between formats.
warnings.warn(smsg, UserWarning)
tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_xgboost_xgbmodel_with_custom_names
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/xgboost/core.py:158: UserWarning: [23:35:47] WARNING: /workspace/src/c_api/c_api.cc:1374: Saving model in the UBJSON format as default. You can use file extension: `json`, `ubj` or `deprecated` to choose between formats.
warnings.warn(smsg, UserWarning)
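The XGBoost warning is informational; choosing the format explicitly by file extension silences it. A sketch, where booster stands in for the trained xgboost.Booster these tests save:

booster.save_model("model.ubj")   # UBJSON binary format (the new default)
booster.save_model("model.json")  # JSON text format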
tests/system/aiplatform/test_pipeline_job.py::TestPipelineJob::test_add_pipeline_job_to_experiment
tests/system/aiplatform/test_experiments.py::TestExperiments::test_add_pipeline_job_to_experiment
tests/system/aiplatform/test_pipeline_job_schedule.py::TestPipelineJobSchedule::test_create_get_pause_resume_update_list
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/kfp/dsl/component_decorator.py:126: FutureWarning: The default base_image used by the @dsl.component decorator will switch from 'python:3.9' to 'python:3.10' on Oct 1, 2025. To ensure your existing components work with versions of the KFP SDK released after that date, you should provide an explicit base_image argument and ensure your component works as intended on Python 3.10.
return component_factory.create_component_from_func(
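The KFP FutureWarning asks for an explicit base_image on every component. A minimal sketch:

from kfp import dsl

@dsl.component(base_image="python:3.10")  # pin instead of relying on the default
def add(a: int, b: int) -> int:
    return a + b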
tests/system/aiplatform/test_pipeline_job.py::TestPipelineJob::test_add_pipeline_job_to_experiment
tests/system/aiplatform/test_pipeline_job_schedule.py::TestPipelineJobSchedule::test_create_get_pause_resume_update_list
/tmpfs/src/github/python-aiplatform/google/cloud/aiplatform/pipeline_jobs.py:902: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
_LOGGER.warn(
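The fix in pipeline_jobs.py:902 is the non-deprecated logger method:

# logging.Logger.warn is a deprecated alias; use warning instead:
_LOGGER.warning("msg")   # was: _LOGGER.warn("msg")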
tests/system/aiplatform/test_experiments.py::TestExperiments::test_add_pipeline_job_to_experiment
/tmpfs/src/github/python-aiplatform/tests/system/aiplatform/test_experiments.py:376: DeprecationWarning: The module `kfp.v2` is deprecated and will be removed in a future version. Please import directly from the `kfp` namespace, instead of `kfp.v2`.
import kfp.v2.dsl as dsl
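And the kfp.v2 deprecation in test_experiments.py:376 is a one-line import migration:

from kfp import dsl  # replaces: import kfp.v2.dsl as dsl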
tests/system/aiplatform/test_experiments.py::TestExperiments::test_add_pipeline_job_to_experiment
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/kfp/compiler/compiler.py:81: DeprecationWarning: Compiling to JSON is deprecated and will be removed in a future version. Please compile to a YAML file by providing a file path with a .yaml extension instead.
builder.write_pipeline_spec_to_file(
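Compiling to YAML, as the compiler warning requests, is a matter of the output extension. A sketch, where my_pipeline stands for a @dsl.pipeline function:

from kfp import compiler

compiler.Compiler().compile(
    pipeline_func=my_pipeline,
    package_path="pipeline.yaml",  # .yaml selects YAML; .json is deprecated
)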
tests/system/aiplatform/test_prediction_cpr.py::TestPredictionCpr::test_build_cpr_model_upload_and_deploy
tests/system/aiplatform/test_prediction_cpr.py::TestPredictionCpr::test_build_cpr_model_upload_and_deploy
/usr/local/lib/python3.10/subprocess.py:955: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
self.stdin = io.open(p2cwrite, 'wb', bufsize)
tests/system/aiplatform/test_prediction_cpr.py::TestPredictionCpr::test_build_cpr_model_upload_and_deploy
tests/system/aiplatform/test_prediction_cpr.py::TestPredictionCpr::test_build_cpr_model_upload_and_deploy
/usr/local/lib/python3.10/subprocess.py:961: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
self.stdout = io.open(c2pread, 'rb', bufsize)
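The two subprocess RuntimeWarnings mean bufsize=1 was passed while the pipes were opened in binary mode; line buffering only applies in text mode. A sketch of the warning-free call:

import subprocess

proc = subprocess.Popen(
    ["cat"],                 # hypothetical command
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    bufsize=1,               # line buffering...
    text=True,               # ...is only honored in text mode
)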
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_serve_to_df
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pyarrow/pandas_compat.py:735: DeprecationWarning: DatetimeTZBlock is deprecated and will be removed in a future version. Use public APIs instead.
klass=_int.DatetimeTZBlock,
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_serve_to_df
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pandas/core/frame.py:717: DeprecationWarning: Passing a BlockManager to DataFrame is deprecated and will raise in a future version. Use public APIs instead.
warnings.warn(
tests/system/aiplatform/test_e2e_tabular.py::TestEndToEndTabular::test_end_to_end_tabular
/tmpfs/src/github/python-aiplatform/tests/system/aiplatform/test_e2e_tabular.py:203: PendingDeprecationWarning: Blob.download_as_string() is deprecated and will be removed in future. Use Blob.download_as_bytes() instead.
error_output_filestr = blob.download_as_string().decode()
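The drop-in replacement for the deprecated Blob.download_as_string() at test_e2e_tabular.py:203:

error_output_filestr = blob.download_as_bytes().decode()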
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
- generated xml file: /tmpfs/src/github/python-aiplatform/system_3.10_sponge_log.xml -
=========================== short test summary info ============================
FAILED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_xgboost_booster_with_custom_uri
FAILED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_xgboost_xgbmodel_with_custom_names
FAILED tests/system/aiplatform/test_autologging.py::TestAutologging::test_autologging_with_autorun_creation
FAILED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_tensorflow_keras_model_with_input_example
FAILED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_tensorflow_module_with_gpu_container
FAILED tests/system/aiplatform/test_prediction_cpr.py::TestPredictionCpr::test_build_cpr_model_upload_and_deploy
FAILED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_deploy_model_with_gpu_container
FAILED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_function_calling[grpc-PROD_ENDPOINT]
FAILED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_function_calling[rest-PROD_ENDPOINT]
FAILED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.0-pro-PROD_ENDPOINT]
FAILED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-pro-PROD_ENDPOINT]
FAILED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-flash-PROD_ENDPOINT]
FAILED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-flash-002-PROD_ENDPOINT]
FAILED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-pro-002-PROD_ENDPOINT]
FAILED tests/system/vertex_ray/test_ray_data.py::TestRayData::test_ray_data[2.9]
FAILED tests/system/vertex_ray/test_ray_data.py::TestRayData::test_ray_data[2.33]
FAILED tests/system/vertex_ray/test_cluster_management.py::TestClusterManagement::test_cluster_management[2.33]
FAILED tests/system/vertex_ray/test_job_submission_dashboard.py::TestJobSubmissionDashboard::test_job_submission_dashboard[2.9]
FAILED tests/system/vertex_ray/test_job_submission_dashboard.py::TestJobSubmissionDashboard::test_job_submission_dashboard[2.33]
FAILED tests/system/aiplatform/test_persistent_resource.py::TestPersistentResource::test_create_persistent_resource
FAILED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_two_models_one_valid_config
FAILED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_two_models_two_valid_configs
FAILED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_notification_channel_alert_config
===== 23 failed, 219 passed, 6 skipped, 162 warnings in 8707.72s (2:25:07) =====
nox > Command py.test -v --junitxml=system_3.10_sponge_log.xml tests/system failed with exit code 1
nox > Session system-3.10 failed.
[FlakyBot] Sending logs to Flaky Bot...
[FlakyBot] See https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot.
[FlakyBot] Published system_3.10_sponge_log.xml (14116925504925142)!
[FlakyBot] Done!
cleanup
[ID: 4692988] Command finished after 9038 secs, exit value: 1
Warning: Permanently added 'localhost' (ED25519) to the list of known hosts.
[17:59:05 PST] Collecting build artifacts from build VM
Build script failed with exit code: 1