Build/Test Explorer

TestFusion
Invocation status: Failed

Kokoro: cloud-devrel/client-libraries/python/googleapis/python-aiplatform/continuous/system

2 targets evaluated for 3 hr, 35 min, 46 sec
by kokoro-github-subscriber
2 Failed

Showing build.log

Warning: Permanently added 'localhost' (ED25519) to the list of known hosts.
[15:07:11 PST] Transferring environment variable script to build VM
[15:07:12 PST] Transferring kokoro_log_reader.py to build VM
[15:07:13 PST] Transferring source code to build VM
[15:07:29 PST] Executing build script on build VM



[ID: 5338615] Executing command via SSH:
export KOKORO_BUILD_NUMBER="3142"
export KOKORO_JOB_NAME="cloud-devrel/client-libraries/python/googleapis/python-aiplatform/continuous/system"
source /tmpfs/kokoro-env_vars.sh
cd /tmpfs/src/
chmod 755 github/python-aiplatform/.kokoro/trampoline.sh
PYTHON_3_VERSION="$(pyenv which python3 2> /dev/null || which python3)"
PYTHON_2_VERSION="$(pyenv which python2 2> /dev/null || which python2)"
if "$PYTHON_3_VERSION" -c "import psutil" ; then KOKORO_PYTHON_COMMAND="$PYTHON_3_VERSION" ; else KOKORO_PYTHON_COMMAND="$PYTHON_2_VERSION" ; fi > /dev/null 2>&1
echo "export KOKORO_PYTHON_COMMAND="$KOKORO_PYTHON_COMMAND"" > "$HOME/.kokoro_python_vars"
nohup bash -c "( rm -f /tmpfs/kokoro_build_exit_code ; github/python-aiplatform/.kokoro/trampoline.sh ; echo \${PIPESTATUS[0]} > /tmpfs/kokoro_build_exit_code ) > /tmpfs/kokoro_build.log 2>&1" > /dev/null 2>&1 &
echo $! > /tmpfs/kokoro_build.pid
source "$HOME/.kokoro_python_vars"
"$KOKORO_PYTHON_COMMAND" /tmpfs/kokoro_log_reader.py /tmpfs/kokoro_build.log /tmpfs/kokoro_build_exit_code /tmpfs/kokoro_build.pid /tmpfs/kokoro_log_reader.pid --start_byte 0
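The launcher command above follows a common pattern: pick a Python interpreter that can import psutil, run the real build script detached so the SSH session can return, and record the exit code and PID for a separate log-reader process. A minimal sketch of that pattern, with illustrative `/tmp` paths and a hypothetical `./build.sh` in place of the Kokoro trampoline:

```shell
#!/usr/bin/env bash
# Sketch of the detached-build launcher pattern (paths are illustrative).

# Prefer python3 if it can import psutil; otherwise fall back to python2.
PY3="$(command -v python3)"
PY2="$(command -v python2)"
if "$PY3" -c "import psutil" > /dev/null 2>&1; then
  PYTHON_CMD="$PY3"
else
  PYTHON_CMD="$PY2"
fi

# Run the build detached, capturing its exit code and log so a separate
# reader process can tail the log and report status.
nohup bash -c '( rm -f /tmp/build_exit_code ; ./build.sh ; \
  echo ${PIPESTATUS[0]} > /tmp/build_exit_code ) > /tmp/build.log 2>&1' \
  > /dev/null 2>&1 &
echo $! > /tmp/build.pid
```

Writing the exit code to a file rather than relying on the shell's `$?` is what lets the log reader outlive the SSH session that started the build.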

2024-11-19 15:07:30 Creating folder on disk for secrets: /tmpfs/src/gfile/secret_manager
Activated service account credentials for: [kokoro-trampoline@cloud-devrel-kokoro-resources.iam.gserviceaccount.com]
WARNING: Your config file at [/home/kbuilder/.docker/config.json] contains these credential helper entries:

{
  "credHelpers": {
    "gcr.io": "gcr",
    "us.gcr.io": "gcr",
    "asia.gcr.io": "gcr",
    "staging-k8s.gcr.io": "gcr",
    "eu.gcr.io": "gcr"
  }
}
These will be overwritten.
Docker configuration file updated.
Using default tag: latest
latest: Pulling from cloud-devrel-kokoro-resources/python-multi
(25 image layers pulled; per-layer pull/wait/checksum progress omitted)
Digest: sha256:9bb9bd3cd602cf04a361ef46d2f32e514ed06ab856e0fb889349572ec028d89a
Status: Downloaded newer image for gcr.io/cloud-devrel-kokoro-resources/python-multi:latest
gcr.io/cloud-devrel-kokoro-resources/python-multi:latest
Executing: docker run --rm --interactive --network=host --privileged --volume=/var/run/docker.sock:/var/run/docker.sock --workdir=/tmpfs/src --entrypoint=github/python-aiplatform/.kokoro/build.sh --env-file=/tmpfs/tmp/tmpgpx3sooh/envfile --volume=/tmpfs:/tmpfs gcr.io/cloud-devrel-kokoro-resources/python-multi
KOKORO_KEYSTORE_DIR=/tmpfs/src/keystore
KOKORO_GITHUB_COMMIT_URL=https://github.com/googleapis/python-aiplatform/commit/deb6c9c4dbed88580327bcbae3d09fe9d1e92f5c
KOKORO_JOB_NAME=cloud-devrel/client-libraries/python/googleapis/python-aiplatform/continuous/system
KOKORO_GIT_COMMIT=deb6c9c4dbed88580327bcbae3d09fe9d1e92f5c
KOKORO_JOB_CLUSTER=GCP_UBUNTU
KOKORO_BLAZE_DIR=/tmpfs/src/objfs
KOKORO_ROOT=/tmpfs
KOKORO_JOB_TYPE=CONTINUOUS_INTEGRATION
KOKORO_ROOT_DIR=/tmpfs/
KOKORO_BUILD_NUMBER=3142
KOKORO_JOB_POOL=yoshi-ubuntu
KOKORO_GITHUB_COMMIT=deb6c9c4dbed88580327bcbae3d09fe9d1e92f5c
KOKORO_BUILD_INITIATOR=kokoro-github-subscriber
KOKORO_ARTIFACTS_DIR=/tmpfs/src
KOKORO_BUILD_ID=b524ad9c-d572-4867-ade1-452eddbf2bb0
KOKORO_GFILE_DIR=/tmpfs/src/gfile
KOKORO_BUILD_CONFIG_DIR=
KOKORO_POSIX_ROOT=/tmpfs
KOKORO_BUILD_ARTIFACTS_SUBDIR=prod/cloud-devrel/client-libraries/python/googleapis/python-aiplatform/continuous/system/3142/20241119-150623
WARNING: Skipping nox-automation as it is not installed.

[notice] A new release of pip is available: 23.0.1 -> 24.3.1
[notice] To update, run: pip install --upgrade pip
2024.10.9
nox > Running session system-3.10
nox > Creating virtual environment (virtualenv) using python3.10 in .nox/system-3-10
nox > python -m pip install --pre 'grpcio!=1.52.0rc1'
nox > python -m pip install mock pytest google-cloud-testutils -c /tmpfs/src/github/python-aiplatform/testing/constraints-3.10.txt
nox > python -m pip install -e '.[testing]' -c /tmpfs/src/github/python-aiplatform/testing/constraints-3.10.txt
nox > py.test -v --junitxml=system_3.10_sponge_log.xml tests/system
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pytest_asyncio/plugin.py:208: PytestDeprecationWarning: The configuration option "asyncio_default_fixture_loop_scope" is unset.
The event loop scope for asynchronous fixtures will default to the fixture caching scope. Future versions of pytest-asyncio will default the loop scope for asynchronous fixtures to function scope. Set the default fixture loop scope explicitly in order to avoid unexpected behavior in the future. Valid fixture loop scopes are: "function", "class", "module", "package", "session"

warnings.warn(PytestDeprecationWarning(_DEFAULT_FIXTURE_LOOP_SCOPE_UNSET))
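As the warning itself suggests, it can be silenced by setting the loop scope explicitly in the pytest configuration. A minimal example (the option name comes from the warning text; the `function` value is one of the listed valid scopes):

```ini
# pytest.ini — pin the default event-loop scope for async fixtures
[pytest]
asyncio_default_fixture_loop_scope = function
```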
============================= test session starts ==============================
platform linux -- Python 3.10.15, pytest-8.3.3, pluggy-1.5.0 -- /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python
cachedir: .pytest_cache
rootdir: /tmpfs/src/github/python-aiplatform
plugins: asyncio-0.24.0, xdist-3.3.1, anyio-3.7.1
asyncio: mode=strict, default_loop_scope=None
created: 16/16 workers
16 workers [244 items]

scheduling tests via LoadScopeScheduling

tests/system/aiplatform/test_custom_job.py::TestCustomJob::test_from_local_script_prebuilt_container
tests/system/aiplatform/test_autologging.py::TestAutologging::test_autologging_with_autorun_creation
tests/system/aiplatform/test_e2e_metadata_schema.py::TestMetadataSchema::test_system_dataset_artifact_create
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation[grpc]
tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_sklearn_model
tests/system/aiplatform/test_initializer.py::TestInitializer::test_init_calls_set_google_auth_default
tests/system/aiplatform/test_experiments.py::TestExperiments::test_create_experiment
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_create_get_list_featurestore
tests/system/aiplatform/test_matching_engine_index.py::TestMatchingEngine::test_create_get_list_matching_engine_index
tests/system/aiplatform/test_e2e_forecasting.py::TestEndToEndForecasting2::test_end_to_end_forecasting[SequenceToSequencePlusForecastingTrainingJob]
tests/system/aiplatform/test_e2e_forecasting.py::TestEndToEndForecasting3::test_end_to_end_forecasting[TemporalFusionTransformerForecastingTrainingJob]
tests/system/aiplatform/test_e2e_tabular.py::TestEndToEndTabular::test_end_to_end_tabular
tests/system/aiplatform/test_e2e_forecasting.py::TestEndToEndForecasting4::test_end_to_end_forecasting[TimeSeriesDenseEncoderForecastingTrainingJob]
tests/system/aiplatform/test_e2e_forecasting.py::TestEndToEndForecasting1::test_end_to_end_forecasting[AutoMLForecastingTrainingJob]
tests/system/aiplatform/test_batch_prediction.py::TestBatchPredictionJob::test_model_monitoring
[gw13] [ 0%] PASSED tests/system/aiplatform/test_initializer.py::TestInitializer::test_init_calls_set_google_auth_default
tests/system/aiplatform/test_dataset.py::TestDataset::test_get_existing_dataset
[gw2] [ 0%] SKIPPED tests/system/aiplatform/test_dataset.py::TestDataset::test_get_existing_dataset
tests/system/aiplatform/test_initializer.py::TestInitializer::test_init_rest_async_incorrect_credentials
tests/system/aiplatform/test_dataset.py::TestDataset::test_get_nonexistent_dataset
[gw13] [ 1%] PASSED tests/system/aiplatform/test_initializer.py::TestInitializer::test_init_rest_async_incorrect_credentials
tests/system/aiplatform/test_pipeline_job.py::TestPipelineJob::test_add_pipeline_job_to_experiment
[gw2] [ 1%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_get_nonexistent_dataset
tests/system/aiplatform/test_dataset.py::TestDataset::test_get_new_dataset_and_import
[gw8] [ 2%] PASSED tests/system/aiplatform/test_e2e_metadata_schema.py::TestMetadataSchema::test_system_dataset_artifact_create
tests/system/aiplatform/test_e2e_metadata_schema.py::TestMetadataSchema::test_google_dataset_artifact_create
[gw14] [ 2%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation[rest]
[gw8] [ 2%] PASSED tests/system/aiplatform/test_e2e_metadata_schema.py::TestMetadataSchema::test_google_dataset_artifact_create
tests/system/aiplatform/test_e2e_metadata_schema.py::TestMetadataSchema::test_execution_create_using_system_schema_class
[gw8] [ 3%] PASSED tests/system/aiplatform/test_e2e_metadata_schema.py::TestMetadataSchema::test_execution_create_using_system_schema_class
[gw14] [ 3%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation_preview_count_tokens[grpc]
tests/system/aiplatform/test_project_id_inference.py::TestProjectIDInference::test_project_id_inference
[gw14] [ 4%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation_preview_count_tokens[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation_preview_count_tokens[rest]
[gw11] [ 4%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_create_experiment
tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_experiment
[gw14] [ 4%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation_preview_count_tokens[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation_model_predict_async
[gw11] [ 5%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_experiment
tests/system/aiplatform/test_experiments.py::TestExperiments::test_start_run
[gw10] [ 5%] PASSED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_sklearn_model
tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_xgboost_booster_with_custom_uri
[gw14] [ 6%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation_model_predict_async
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation_streaming[grpc]
[gw14] [ 6%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation_streaming[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation_streaming[rest]
[gw11] [ 6%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_start_run
tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_run
[gw14] [ 7%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation_streaming[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_preview_text_generation_from_pretrained[grpc]
[gw10] [ 7%] PASSED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_xgboost_booster_with_custom_uri
tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_xgboost_xgbmodel_with_custom_names
[gw11] [ 8%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_run
tests/system/aiplatform/test_experiments.py::TestExperiments::test_log_params
[gw14] [ 8%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_preview_text_generation_from_pretrained[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_preview_text_generation_from_pretrained[rest]
[gw14] [ 9%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_preview_text_generation_from_pretrained[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_on_chat_model[grpc]
[gw10] [ 9%] PASSED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_xgboost_xgbmodel_with_custom_names
tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_tensorflow_keras_model_with_input_example
[gw14] [ 9%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_on_chat_model[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_on_chat_model[rest]
[gw11] [ 10%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_log_params
tests/system/aiplatform/test_experiments.py::TestExperiments::test_log_metrics
[gw0] [ 10%] FAILED tests/system/aiplatform/test_autologging.py::TestAutologging::test_autologging_with_autorun_creation
tests/system/aiplatform/test_autologging.py::TestAutologging::test_autologging_with_manual_run_creation
[gw14] [ 11%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_on_chat_model[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_model_preview_count_tokens[grpc]
[gw11] [ 11%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_log_metrics
tests/system/aiplatform/test_experiments.py::TestExperiments::test_log_time_series_metrics
[gw14] [ 11%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_model_preview_count_tokens[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_model_preview_count_tokens[rest]
[gw14] [ 12%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_model_preview_count_tokens[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_model_async
[gw14] [ 12%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_model_async
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_model_send_message_streaming[grpc]
[gw13] [ 13%] PASSED tests/system/aiplatform/test_pipeline_job.py::TestPipelineJob::test_add_pipeline_job_to_experiment
[gw14] [ 13%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_model_send_message_streaming[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_model_send_message_streaming[rest]
[gw11] [ 13%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_log_time_series_metrics
tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_time_series_data_frame_batch_read_success
tests/system/aiplatform/test_prediction_cpr.py::TestPredictionCpr::test_build_cpr_model_upload_and_deploy
[gw14] [ 14%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_chat_model_send_message_streaming[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_embedding[grpc]
[gw14] [ 14%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_embedding[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_embedding[rest]
[gw14] [ 15%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_embedding[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_embedding_async
[gw14] [ 15%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_embedding_async
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_tuning[grpc]
[gw14] [ 15%] SKIPPED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_tuning[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_tuning[rest]
[gw14] [ 16%] SKIPPED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_tuning[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_batch_prediction_for_text_generation[grpc]
[gw10] [ 16%] PASSED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_tensorflow_keras_model_with_input_example
tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_tensorflow_module_with_gpu_container
[gw0] [ 17%] PASSED tests/system/aiplatform/test_autologging.py::TestAutologging::test_autologging_with_manual_run_creation
tests/system/aiplatform/test_autologging.py::TestAutologging::test_autologging_nested_run_model
[gw11] [ 17%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_time_series_data_frame_batch_read_success
tests/system/aiplatform/test_experiments.py::TestExperiments::test_log_classification_metrics
[gw0] [ 18%] PASSED tests/system/aiplatform/test_autologging.py::TestAutologging::test_autologging_nested_run_model
[gw10] [ 18%] PASSED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_tensorflow_module_with_gpu_container
tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_deploy_model_with_cpu_container
[gw11] [ 18%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_log_classification_metrics
tests/system/aiplatform/test_experiments.py::TestExperiments::test_log_model
[gw11] [ 19%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_log_model
tests/system/aiplatform/test_experiments.py::TestExperiments::test_create_artifact
[gw11] [ 19%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_create_artifact
tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_artifact_by_uri
[gw11] [ 20%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_artifact_by_uri
tests/system/aiplatform/test_experiments.py::TestExperiments::test_log_execution_and_artifact
tests/system/aiplatform/test_tensorboard.py::TestTensorboard::test_create_and_get_tensorboard
[gw14] [ 20%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_batch_prediction_for_text_generation[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_batch_prediction_for_text_generation[rest]
[gw0] [ 20%] PASSED tests/system/aiplatform/test_tensorboard.py::TestTensorboard::test_create_and_get_tensorboard
tests/system/aiplatform/test_tensorboard.py::TestTensorboard::test_create_and_get_tensorboard_experiment
[gw0] [ 21%] PASSED tests/system/aiplatform/test_tensorboard.py::TestTensorboard::test_create_and_get_tensorboard_experiment
tests/system/aiplatform/test_tensorboard.py::TestTensorboard::test_create_and_get_tensorboard_run
[gw0] [ 21%] PASSED tests/system/aiplatform/test_tensorboard.py::TestTensorboard::test_create_and_get_tensorboard_run
tests/system/aiplatform/test_tensorboard.py::TestTensorboard::test_create_and_get_tensorboard_time_series
[gw0] [ 22%] PASSED tests/system/aiplatform/test_tensorboard.py::TestTensorboard::test_create_and_get_tensorboard_time_series
tests/system/aiplatform/test_tensorboard.py::TestTensorboard::test_write_tensorboard_scalar_data
[gw0] [ 22%] PASSED tests/system/aiplatform/test_tensorboard.py::TestTensorboard::test_write_tensorboard_scalar_data
tests/system/vertex_ray/test_job_submission_dashboard.py::TestJobSubmissionDashboard::test_job_submission_dashboard[2.9]
[gw11] [ 22%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_log_execution_and_artifact
tests/system/aiplatform/test_experiments.py::TestExperiments::test_end_run
[gw14] [ 23%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_batch_prediction_for_text_generation[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_batch_prediction_for_textembedding[grpc]
[gw11] [ 23%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_end_run
tests/system/aiplatform/test_experiments.py::TestExperiments::test_run_context_manager
[gw11] [ 24%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_run_context_manager
tests/system/aiplatform/test_experiments.py::TestExperiments::test_add_pipeline_job_to_experiment
[gw14] [ 24%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_batch_prediction_for_textembedding[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_batch_prediction_for_textembedding[rest]
[gw13] [ 25%] FAILED tests/system/aiplatform/test_prediction_cpr.py::TestPredictionCpr::test_build_cpr_model_upload_and_deploy
tests/system/aiplatform/test_private_endpoint.py::TestPrivateEndpoint::test_create_deploy_delete_private_endpoint
[gw14] [ 25%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_batch_prediction_for_textembedding[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_batch_prediction_for_code_generation[grpc]
[gw14] [ 25%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_batch_prediction_for_code_generation[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_batch_prediction_for_code_generation[rest]
[gw11] [ 26%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_add_pipeline_job_to_experiment
tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_experiments_df
[gw8] [ 26%] PASSED tests/system/aiplatform/test_project_id_inference.py::TestProjectIDInference::test_project_id_inference
tests/system/aiplatform/test_telemetry.py::TestTelemetry::test_single_context_manager
[gw11] [ 27%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_experiments_df
tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_experiments_df_include_time_series_false
[gw8] [ 27%] PASSED tests/system/aiplatform/test_telemetry.py::TestTelemetry::test_single_context_manager
tests/system/aiplatform/test_telemetry.py::TestTelemetry::test_nested_context_manager
[gw11] [ 27%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_experiments_df_include_time_series_false
tests/system/aiplatform/test_experiments.py::TestExperiments::test_delete_run_does_not_exist_raises_exception
[gw14] [ 28%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_batch_prediction_for_code_generation[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_code_generation_streaming[grpc]
[gw8] [ 28%] PASSED tests/system/aiplatform/test_telemetry.py::TestTelemetry::test_nested_context_manager
tests/system/vertexai/test_batch_prediction.py::TestBatchPrediction::test_batch_prediction_with_gcs_input
[gw14] [ 29%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_code_generation_streaming[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_code_generation_streaming[rest]
[gw11] [ 29%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_delete_run_does_not_exist_raises_exception
tests/system/aiplatform/test_experiments.py::TestExperiments::test_delete_run_success
[gw14] [ 29%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_code_generation_streaming[rest]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_code_chat_model_send_message_streaming[grpc]
[gw14] [ 30%] FAILED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_code_chat_model_send_message_streaming[grpc]
tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_code_chat_model_send_message_streaming[rest]
[gw11] [ 30%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_delete_run_success
tests/system/aiplatform/test_experiments.py::TestExperiments::test_reuse_run_success
[gw14] [ 31%] PASSED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_code_chat_model_send_message_streaming[rest]
tests/system/vertexai/test_reasoning_engines.py::TestReasoningEngines::test_langchain_template
[gw11] [ 31%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_reuse_run_success
tests/system/aiplatform/test_experiments.py::TestExperiments::test_delete_run_then_tensorboard_success
[gw11] [ 31%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_delete_run_then_tensorboard_success
tests/system/aiplatform/test_experiments.py::TestExperiments::test_delete_wout_backing_tensorboard_reuse_run_raises_exception
[gw11] [ 32%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_delete_wout_backing_tensorboard_reuse_run_raises_exception
tests/system/aiplatform/test_experiments.py::TestExperiments::test_delete_experiment_does_not_exist_raises_exception
[gw11] [ 32%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_delete_experiment_does_not_exist_raises_exception
tests/system/aiplatform/test_experiments.py::TestExperiments::test_init_associates_global_tensorboard_to_experiment
[gw11] [ 33%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_init_associates_global_tensorboard_to_experiment
tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_backing_tensorboard_resource_returns_tensorboard
[gw11] [ 33%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_backing_tensorboard_resource_returns_tensorboard
tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_backing_tensorboard_resource_returns_none
[gw11] [ 34%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_get_backing_tensorboard_resource_returns_none
tests/system/aiplatform/test_experiments.py::TestExperiments::test_delete_backing_tensorboard_experiment_run_success
[gw11] [ 34%] PASSED tests/system/aiplatform/test_experiments.py::TestExperiments::test_delete_backing_tensorboard_experiment_run_success
[gw10] [ 34%] PASSED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_deploy_model_with_cpu_container
tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_deploy_model_with_gpu_container
[gw1] [ 35%] PASSED tests/system/aiplatform/test_custom_job.py::TestCustomJob::test_from_local_script_prebuilt_container
tests/system/aiplatform/test_custom_job.py::TestCustomJob::test_from_local_script_custom_container
[gw12] [ 35%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_create_get_list_featurestore
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_create_get_list_entity_types
[gw12] [ 36%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_create_get_list_entity_types
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_create_get_list_features
[gw12] [ 36%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_create_get_list_features
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_ingest_feature_values
[gw8] [ 36%] PASSED tests/system/vertexai/test_batch_prediction.py::TestBatchPrediction::test_batch_prediction_with_gcs_input
tests/system/vertexai/test_batch_prediction.py::TestBatchPrediction::test_batch_prediction_with_bq_input
[gw14] [ 37%] PASSED tests/system/vertexai/test_reasoning_engines.py::TestReasoningEngines::test_langchain_template
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[_get_tokenizer_for_model_preview-gemini-1.0-pro-udhr-udhr-PROD_ENDPOINT]
[gw2] [ 37%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_get_new_dataset_and_import
tests/system/aiplatform/test_dataset.py::TestDataset::test_create_and_import_image_dataset
[gw14] [ 38%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[_get_tokenizer_for_model_preview-gemini-1.0-pro-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[_get_tokenizer_for_model_preview-gemini-1.5-pro-udhr-udhr-PROD_ENDPOINT]
[gw0] [ 38%] FAILED tests/system/vertex_ray/test_job_submission_dashboard.py::TestJobSubmissionDashboard::test_job_submission_dashboard[2.9]
tests/system/vertex_ray/test_job_submission_dashboard.py::TestJobSubmissionDashboard::test_job_submission_dashboard[2.33]
[gw1] [ 38%] PASSED tests/system/aiplatform/test_custom_job.py::TestCustomJob::test_from_local_script_custom_container
tests/system/aiplatform/test_custom_job.py::TestCustomJob::test_from_local_script_enable_autolog_prebuilt_container
[gw14] [ 39%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[_get_tokenizer_for_model_preview-gemini-1.5-pro-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[_get_tokenizer_for_model_preview-gemini-1.5-flash-udhr-udhr-PROD_ENDPOINT]
[gw14] [ 39%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[_get_tokenizer_for_model_preview-gemini-1.5-flash-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[_get_tokenizer_for_model_preview-gemini-1.5-flash-002-udhr-udhr-PROD_ENDPOINT]
[gw10] [ 40%] PASSED tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_deploy_model_with_gpu_container
[gw8] [ 40%] PASSED tests/system/vertexai/test_batch_prediction.py::TestBatchPrediction::test_batch_prediction_with_bq_input
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_with_cached_content_from_text[grpc-PROD_ENDPOINT]
tests/system/vertex_ray/test_cluster_management.py::TestClusterManagement::test_cluster_management[2.9]
[gw14] [ 40%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[_get_tokenizer_for_model_preview-gemini-1.5-flash-002-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[_get_tokenizer_for_model_preview-gemini-1.5-pro-002-udhr-udhr-PROD_ENDPOINT]
[gw14] [ 41%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[_get_tokenizer_for_model_preview-gemini-1.5-pro-002-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[get_tokenizer_for_model-gemini-1.0-pro-udhr-udhr-PROD_ENDPOINT]
[gw8] [ 41%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_with_cached_content_from_text[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_with_cached_content_from_text[rest-PROD_ENDPOINT]
[gw14] [ 42%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[get_tokenizer_for_model-gemini-1.0-pro-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[get_tokenizer_for_model-gemini-1.5-pro-udhr-udhr-PROD_ENDPOINT]
[gw14] [ 42%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[get_tokenizer_for_model-gemini-1.5-pro-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[get_tokenizer_for_model-gemini-1.5-flash-udhr-udhr-PROD_ENDPOINT]
[gw8] [ 43%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_with_cached_content_from_text[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text[grpc-PROD_ENDPOINT]
[gw8] [ 43%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text[rest-PROD_ENDPOINT]
[gw8] [ 43%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_latency[grpc-PROD_ENDPOINT]
[gw8] [ 44%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_latency[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_latency[rest-PROD_ENDPOINT]
[gw8] [ 44%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_latency[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_async[grpc-PROD_ENDPOINT]
[gw14] [ 45%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[get_tokenizer_for_model-gemini-1.5-flash-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[get_tokenizer_for_model-gemini-1.5-flash-002-udhr-udhr-PROD_ENDPOINT]
[gw8] [ 45%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_async[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_async[rest-PROD_ENDPOINT]
[gw12] [ 45%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_ingest_feature_values
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_create_features
[gw8] [ 46%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_async[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_streaming[grpc-PROD_ENDPOINT]
[gw12] [ 46%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_create_features
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_ingest_feature_values_from_df_using_feature_time_column_and_online_read_multiple_entities
[gw8] [ 47%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_streaming[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_streaming[rest-PROD_ENDPOINT]
[gw8] [ 47%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_streaming[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_streaming_async[grpc-PROD_ENDPOINT]
[gw8] [ 47%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_streaming_async[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_streaming_async[rest-PROD_ENDPOINT]
[gw8] [ 48%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_streaming_async[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_with_parameters[grpc-PROD_ENDPOINT]
[gw8] [ 48%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_with_parameters[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_with_parameters[rest-PROD_ENDPOINT]
[gw8] [ 49%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_with_parameters[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_with_gemini_15_parameters[grpc-PROD_ENDPOINT]
[gw14] [ 49%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[get_tokenizer_for_model-gemini-1.5-flash-002-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[get_tokenizer_for_model-gemini-1.5-pro-002-udhr-udhr-PROD_ENDPOINT]
[gw8] [ 50%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_with_gemini_15_parameters[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_with_gemini_15_parameters[rest-PROD_ENDPOINT]
[gw8] [ 50%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_with_gemini_15_parameters[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_list_of_content_dict[grpc-PROD_ENDPOINT]
[gw8] [ 50%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_list_of_content_dict[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_list_of_content_dict[rest-PROD_ENDPOINT]
[gw8] [ 51%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_list_of_content_dict[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_remote_image[grpc-PROD_ENDPOINT]
[gw8] [ 51%] SKIPPED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_remote_image[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_remote_image[rest-PROD_ENDPOINT]
[gw8] [ 52%] SKIPPED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_remote_image[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_image[grpc-PROD_ENDPOINT]
[gw8] [ 52%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_image[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_image[rest-PROD_ENDPOINT]
[gw8] [ 52%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_image[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_video[grpc-PROD_ENDPOINT]
[gw8] [ 53%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_video[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_video[rest-PROD_ENDPOINT]
[gw14] [ 53%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_local[get_tokenizer_for_model-gemini-1.5-pro-002-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[_get_tokenizer_for_model_preview-gemini-1.0-pro-udhr-udhr-PROD_ENDPOINT]
[gw8] [ 54%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_video[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_audio[grpc-PROD_ENDPOINT]
[gw8] [ 54%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_audio[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_audio[rest-PROD_ENDPOINT]
[gw8] [ 54%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_from_text_and_remote_audio[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_grounding_google_search_retriever[grpc-PROD_ENDPOINT]
[gw8] [ 55%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_grounding_google_search_retriever[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_grounding_google_search_retriever[rest-PROD_ENDPOINT]
[gw8] [ 55%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_grounding_google_search_retriever[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_grounding_google_search_retriever_with_dynamic_retrieval[grpc-PROD_ENDPOINT]
[gw14] [ 56%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[_get_tokenizer_for_model_preview-gemini-1.0-pro-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[_get_tokenizer_for_model_preview-gemini-1.5-pro-udhr-udhr-PROD_ENDPOINT]
[gw8] [ 56%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_grounding_google_search_retriever_with_dynamic_retrieval[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_grounding_google_search_retriever_with_dynamic_retrieval[rest-PROD_ENDPOINT]
[gw8] [ 56%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_grounding_google_search_retriever_with_dynamic_retrieval[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_send_message_from_text[grpc-PROD_ENDPOINT]
[gw8] [ 57%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_send_message_from_text[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_send_message_from_text[rest-PROD_ENDPOINT]
[gw8] [ 57%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_send_message_from_text[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_chat_function_calling[grpc-PROD_ENDPOINT]
[gw8] [ 58%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_chat_function_calling[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_chat_function_calling[rest-PROD_ENDPOINT]
[gw8] [ 58%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_chat_function_calling[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_function_calling[grpc-PROD_ENDPOINT]
[gw8] [ 59%] FAILED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_function_calling[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_function_calling[rest-PROD_ENDPOINT]
[gw8] [ 59%] FAILED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_function_calling[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_model_router[grpc-PROD_ENDPOINT]
[gw8] [ 59%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_model_router[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_model_router[rest-PROD_ENDPOINT]
[gw8] [ 60%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_model_router[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_chat_automatic_function_calling[grpc-PROD_ENDPOINT]
[gw8] [ 60%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_chat_automatic_function_calling[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_chat_automatic_function_calling[rest-PROD_ENDPOINT]
[gw8] [ 61%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_chat_automatic_function_calling[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_additional_request_metadata[grpc-PROD_ENDPOINT]
[gw8] [ 61%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_additional_request_metadata[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_additional_request_metadata[rest-PROD_ENDPOINT]
[gw8] [ 61%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_additional_request_metadata[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_compute_tokens_from_text[grpc-PROD_ENDPOINT]
[gw8] [ 62%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_compute_tokens_from_text[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_compute_tokens_from_text[rest-PROD_ENDPOINT]
[gw8] [ 62%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_compute_tokens_from_text[rest-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_count_tokens_from_text[grpc-PROD_ENDPOINT]
[gw14] [ 63%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[_get_tokenizer_for_model_preview-gemini-1.5-pro-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[_get_tokenizer_for_model_preview-gemini-1.5-flash-udhr-udhr-PROD_ENDPOINT]
[gw8] [ 63%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_count_tokens_from_text[grpc-PROD_ENDPOINT]
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_count_tokens_from_text[rest-PROD_ENDPOINT]
[gw8] [ 63%] PASSED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_count_tokens_from_text[rest-PROD_ENDPOINT]
[gw14] [ 64%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[_get_tokenizer_for_model_preview-gemini-1.5-flash-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[_get_tokenizer_for_model_preview-gemini-1.5-flash-002-udhr-udhr-PROD_ENDPOINT]
[gw14] [ 64%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[_get_tokenizer_for_model_preview-gemini-1.5-flash-002-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[_get_tokenizer_for_model_preview-gemini-1.5-pro-002-udhr-udhr-PROD_ENDPOINT]
[gw1] [ 65%] PASSED tests/system/aiplatform/test_custom_job.py::TestCustomJob::test_from_local_script_enable_autolog_prebuilt_container
tests/system/aiplatform/test_custom_job.py::TestCustomJob::test_from_local_script_enable_autolog_custom_container
[gw14] [ 65%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[_get_tokenizer_for_model_preview-gemini-1.5-pro-002-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.0-pro-udhr-udhr-PROD_ENDPOINT]
[gw14] [ 65%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.0-pro-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.5-pro-udhr-udhr-PROD_ENDPOINT]
[gw14] [ 66%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.5-pro-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.5-flash-udhr-udhr-PROD_ENDPOINT]
[gw14] [ 66%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.5-flash-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.5-flash-002-udhr-udhr-PROD_ENDPOINT]
[gw1] [ 67%] PASSED tests/system/aiplatform/test_custom_job.py::TestCustomJob::test_from_local_script_enable_autolog_custom_container
[gw12] [ 67%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_ingest_feature_values_from_df_using_feature_time_column_and_online_read_multiple_entities
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_ingest_feature_values_from_df_using_feature_time_datetime_and_online_read_single_entity
[gw14] [ 68%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.5-flash-002-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.5-pro-002-udhr-udhr-PROD_ENDPOINT]
[gw2] [ 68%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_create_and_import_image_dataset
tests/system/aiplatform/test_dataset.py::TestDataset::test_create_tabular_dataset
[gw2] [ 68%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_create_tabular_dataset
tests/system/aiplatform/test_dataset.py::TestDataset::test_create_tabular_dataset_from_dataframe
[gw2] [ 69%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_create_tabular_dataset_from_dataframe
tests/system/aiplatform/test_dataset.py::TestDataset::test_create_tabular_dataset_from_dataframe_with_provided_schema
[gw2] [ 69%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_create_tabular_dataset_from_dataframe_with_provided_schema
tests/system/aiplatform/test_dataset.py::TestDataset::test_create_time_series_dataset
[gw2] [ 70%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_create_time_series_dataset
tests/system/aiplatform/test_dataset.py::TestDataset::test_export_data
[gw2] [ 70%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_export_data
tests/system/aiplatform/test_dataset.py::TestDataset::test_export_data_for_custom_training
[gw2] [ 70%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_export_data_for_custom_training
tests/system/aiplatform/test_dataset.py::TestDataset::test_update_dataset
[gw2] [ 71%] PASSED tests/system/aiplatform/test_dataset.py::TestDataset::test_update_dataset
[gw14] [ 71%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_compute_tokens[get_tokenizer_for_model-gemini-1.5-pro-002-udhr-udhr-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.0-pro-PROD_ENDPOINT]
[gw14] [ 72%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.0-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.5-pro-PROD_ENDPOINT]
[gw14] [ 72%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.5-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.5-flash-PROD_ENDPOINT]
[gw14] [ 72%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.5-flash-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.5-flash-002-PROD_ENDPOINT]
[gw14] [ 73%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.5-flash-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.5-pro-002-PROD_ENDPOINT]
[gw14] [ 73%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction[gemini-1.5-pro-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.0-pro-PROD_ENDPOINT]
[gw14] [ 74%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.0-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.5-pro-PROD_ENDPOINT]
[gw14] [ 74%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.5-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.5-flash-PROD_ENDPOINT]
[gw14] [ 75%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.5-flash-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.5-flash-002-PROD_ENDPOINT]
[gw14] [ 75%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.5-flash-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.5-pro-002-PROD_ENDPOINT]
[gw14] [ 75%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_call[gemini-1.5-pro-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.0-pro-PROD_ENDPOINT]
[gw14] [ 76%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.0-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.5-pro-PROD_ENDPOINT]
[gw14] [ 76%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.5-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.5-flash-PROD_ENDPOINT]
[gw14] [ 77%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.5-flash-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.5-flash-002-PROD_ENDPOINT]
[gw14] [ 77%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.5-flash-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.5-pro-002-PROD_ENDPOINT]
[gw14] [ 77%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_system_instruction_is_function_response[gemini-1.5-pro-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.0-pro-PROD_ENDPOINT]
[gw14] [ 78%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.0-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.5-pro-PROD_ENDPOINT]
[gw14] [ 78%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.5-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.5-flash-PROD_ENDPOINT]
[gw14] [ 79%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.5-flash-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.5-flash-002-PROD_ENDPOINT]
[gw14] [ 79%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.5-flash-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.5-pro-002-PROD_ENDPOINT]
[gw14] [ 79%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_tool_is_function_declaration[gemini-1.5-pro-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.0-pro-PROD_ENDPOINT]
[gw14] [ 80%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.0-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.5-pro-PROD_ENDPOINT]
[gw14] [ 80%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.5-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.5-flash-PROD_ENDPOINT]
[gw14] [ 81%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.5-flash-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.5-flash-002-PROD_ENDPOINT]
[gw14] [ 81%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.5-flash-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.5-pro-002-PROD_ENDPOINT]
[gw14] [ 81%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_call[gemini-1.5-pro-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.0-pro-PROD_ENDPOINT]
[gw14] [ 82%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.0-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-pro-PROD_ENDPOINT]
[gw14] [ 82%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-pro-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-flash-PROD_ENDPOINT]
[gw14] [ 83%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-flash-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-flash-002-PROD_ENDPOINT]
[gw14] [ 83%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-flash-002-PROD_ENDPOINT]
tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-pro-002-PROD_ENDPOINT]
[gw14] [ 84%] PASSED tests/system/vertexai/test_tokenization.py::TestTokenization::test_count_tokens_content_is_function_response[gemini-1.5-pro-002-PROD_ENDPOINT]
[gw10] [ 84%] PASSED tests/system/vertex_ray/test_cluster_management.py::TestClusterManagement::test_cluster_management[2.9]
tests/system/vertex_ray/test_cluster_management.py::TestClusterManagement::test_cluster_management[2.33]
[gw0] [ 84%] PASSED tests/system/vertex_ray/test_job_submission_dashboard.py::TestJobSubmissionDashboard::test_job_submission_dashboard[2.33]
tests/system/vertex_ray/test_ray_data.py::TestRayData::test_ray_data[2.9]
[gw12] [ 85%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_ingest_feature_values_from_df_using_feature_time_datetime_and_online_read_single_entity
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_write_features
[gw12] [ 85%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_write_features
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_search_features
[gw12] [ 86%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_search_features
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_serve_to_df
[gw12] [ 86%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_serve_to_df
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_serve_to_gcs
[gw10] [ 86%] FAILED tests/system/vertex_ray/test_cluster_management.py::TestClusterManagement::test_cluster_management[2.33]
[gw0] [ 87%] PASSED tests/system/vertex_ray/test_ray_data.py::TestRayData::test_ray_data[2.9]
tests/system/vertex_ray/test_ray_data.py::TestRayData::test_ray_data[2.33]
[gw12] [ 87%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_serve_to_gcs
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_serve_to_bq
[gw12] [ 88%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_serve_to_bq
tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_online_reads
[gw12] [ 88%] PASSED tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_online_reads
[gw0] [ 88%] PASSED tests/system/vertex_ray/test_ray_data.py::TestRayData::test_ray_data[2.33]
[gw13] [ 89%] FAILED tests/system/aiplatform/test_private_endpoint.py::TestPrivateEndpoint::test_create_deploy_delete_private_endpoint
tests/system/aiplatform/test_vizier.py::TestVizier::test_vizier_lifecycle
[gw13] [ 89%] PASSED tests/system/aiplatform/test_vizier.py::TestVizier::test_vizier_lifecycle
tests/system/aiplatform/test_vizier.py::TestVizier::test_vizier_study_deletion
[gw13] [ 90%] PASSED tests/system/aiplatform/test_vizier.py::TestVizier::test_vizier_study_deletion
tests/system/aiplatform/test_vizier.py::TestVizier::test_vizier_trial_deletion
[gw13] [ 90%] PASSED tests/system/aiplatform/test_vizier.py::TestVizier::test_vizier_trial_deletion
[gw3] [ 90%] FAILED tests/system/aiplatform/test_batch_prediction.py::TestBatchPredictionJob::test_model_monitoring
tests/system/aiplatform/test_model_evaluation.py::TestModelEvaluationJob::test_model_evaluate_custom_tabular_model
[gw9] [ 91%] FAILED tests/system/aiplatform/test_e2e_tabular.py::TestEndToEndTabular::test_end_to_end_tabular
[gw15] [ 91%] PASSED tests/system/aiplatform/test_matching_engine_index.py::TestMatchingEngine::test_create_get_list_matching_engine_index
tests/system/aiplatform/test_matching_engine_index.py::TestMatchingEngine::test_matching_engine_stream_index
[gw3] [ 92%] PASSED tests/system/aiplatform/test_model_evaluation.py::TestModelEvaluationJob::test_model_evaluate_custom_tabular_model
[gw6] [ 92%] FAILED tests/system/aiplatform/test_e2e_forecasting.py::TestEndToEndForecasting3::test_end_to_end_forecasting[TemporalFusionTransformerForecastingTrainingJob]
tests/system/aiplatform/test_model_upload.py::TestModelUploadAndUpdate::test_upload_and_deploy_xgboost_model
[gw7] [ 93%] FAILED tests/system/aiplatform/test_e2e_forecasting.py::TestEndToEndForecasting4::test_end_to_end_forecasting[TimeSeriesDenseEncoderForecastingTrainingJob]
tests/system/aiplatform/test_model_version_management.py::TestVersionManagement::test_upload_deploy_manage_versioned_model
[gw7] [ 93%] PASSED tests/system/aiplatform/test_model_version_management.py::TestVersionManagement::test_upload_deploy_manage_versioned_model
[gw4] [ 93%] FAILED tests/system/aiplatform/test_e2e_forecasting.py::TestEndToEndForecasting1::test_end_to_end_forecasting[AutoMLForecastingTrainingJob]
tests/system/aiplatform/test_model_interactions.py::TestModelInteractions::test_prediction
[gw4] [ 94%] PASSED tests/system/aiplatform/test_model_interactions.py::TestModelInteractions::test_prediction
tests/system/aiplatform/test_model_interactions.py::TestModelInteractions::test_endpoint_predict_async
[gw4] [ 94%] PASSED tests/system/aiplatform/test_model_interactions.py::TestModelInteractions::test_endpoint_predict_async
[gw5] [ 95%] FAILED tests/system/aiplatform/test_e2e_forecasting.py::TestEndToEndForecasting2::test_end_to_end_forecasting[SequenceToSequencePlusForecastingTrainingJob]
tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_create_endpoint
tests/system/aiplatform/test_persistent_resource.py::TestPersistentResource::test_create_persistent_resource
[gw5] [ 95%] PASSED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_create_endpoint
tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_two_models_one_valid_config
[gw5] [ 95%] FAILED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_two_models_one_valid_config
tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_pause_and_update_config
[gw5] [ 96%] SKIPPED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_pause_and_update_config
tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_two_models_two_valid_configs
[gw5] [ 96%] FAILED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_two_models_two_valid_configs
tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_invalid_config_incorrect_model_id
[gw5] [ 97%] PASSED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_invalid_config_incorrect_model_id
tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_invalid_config_xai
[gw5] [ 97%] PASSED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_invalid_config_xai
tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_two_models_invalid_configs_xai
[gw5] [ 97%] PASSED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_two_models_invalid_configs_xai
tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_notification_channel_alert_config
[gw5] [ 98%] FAILED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_notification_channel_alert_config
[gw6] [ 98%] PASSED tests/system/aiplatform/test_model_upload.py::TestModelUploadAndUpdate::test_upload_and_deploy_xgboost_model
[gw9] [ 99%] FAILED tests/system/aiplatform/test_persistent_resource.py::TestPersistentResource::test_create_persistent_resource
[gw15] [ 99%] PASSED tests/system/aiplatform/test_matching_engine_index.py::TestMatchingEngine::test_matching_engine_stream_index
tests/system/aiplatform/test_pipeline_job_schedule.py::TestPipelineJobSchedule::test_create_get_pause_resume_update_list
[gw15] [100%] PASSED tests/system/aiplatform/test_pipeline_job_schedule.py::TestPipelineJobSchedule::test_create_get_pause_resume_update_list

=================================== FAILURES ===================================
____________ TestAutologging.test_autologging_with_autorun_creation ____________
[gw0] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python

self = Index(['experiment_name', 'run_name', 'run_type', 'state', 'param.copy_X',
'param.fit_intercept', 'param.positi...ot_mean_squared_error', 'metric.training_r2_score',
'metric.training_mean_squared_error'],
dtype='object')
key = 'metric.training_mae'

def get_loc(self, key):
"""
Get integer location, slice or boolean mask for requested label.

Parameters
----------
key : label

Returns
-------
int if unique index, slice if monotonic index, else mask

Examples
--------
>>> unique_index = pd.Index(list('abc'))
>>> unique_index.get_loc('b')
1

>>> monotonic_index = pd.Index(list('abbc'))
>>> monotonic_index.get_loc('b')
slice(1, 3, None)

>>> non_monotonic_index = pd.Index(list('abcb'))
>>> non_monotonic_index.get_loc('b')
array([False, True, False, True])
"""
casted_key = self._maybe_cast_indexer(key)
try:
> return self._engine.get_loc(casted_key)

.nox/system-3-10/lib/python3.10/site-packages/pandas/core/indexes/base.py:3805:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
index.pyx:167: in pandas._libs.index.IndexEngine.get_loc
???
index.pyx:196: in pandas._libs.index.IndexEngine.get_loc
???
pandas/_libs/hashtable_class_helper.pxi:7081: in pandas._libs.hashtable.PyObjectHashTable.get_item
???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

> ???
E KeyError: 'metric.training_mae'

pandas/_libs/hashtable_class_helper.pxi:7089: KeyError

The above exception was the direct cause of the following exception:

self = <...>
shared_state = {'bucket': <...>, 'resources': [<...>]}

def test_autologging_with_autorun_creation(self, shared_state):

aiplatform.init(
project=e2e_base._PROJECT,
location=e2e_base._LOCATION,
experiment=self._experiment_autocreate_scikit,
experiment_tensorboard=self._backing_tensorboard,
)

shared_state["resources"] = [self._backing_tensorboard]

shared_state["resources"].append(
aiplatform.metadata.metadata._experiment_tracker.experiment
)

aiplatform.autolog()

build_and_train_test_scikit_model()

# Confirm sklearn run, params, and metrics exist
experiment_df_scikit = aiplatform.get_experiment_df()
assert experiment_df_scikit["run_name"][0].startswith("sklearn-")
assert experiment_df_scikit["param.fit_intercept"][0] == "True"
> assert experiment_df_scikit["metric.training_mae"][0] > 0

tests/system/aiplatform/test_autologging.py:162:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-10/lib/python3.10/site-packages/pandas/core/frame.py:4102: in __getitem__
indexer = self.columns.get_loc(key)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = Index(['experiment_name', 'run_name', 'run_type', 'state', 'param.copy_X',
'param.fit_intercept', 'param.positi...ot_mean_squared_error', 'metric.training_r2_score',
'metric.training_mean_squared_error'],
dtype='object')
key = 'metric.training_mae'

def get_loc(self, key):
"""
Get integer location, slice or boolean mask for requested label.

Parameters
----------
key : label

Returns
-------
int if unique index, slice if monotonic index, else mask

Examples
--------
>>> unique_index = pd.Index(list('abc'))
>>> unique_index.get_loc('b')
1

>>> monotonic_index = pd.Index(list('abbc'))
>>> monotonic_index.get_loc('b')
slice(1, 3, None)

>>> non_monotonic_index = pd.Index(list('abcb'))
>>> non_monotonic_index.get_loc('b')
array([False, True, False, True])
"""
casted_key = self._maybe_cast_indexer(key)
try:
return self._engine.get_loc(casted_key)
except KeyError as err:
if isinstance(casted_key, slice) or (
isinstance(casted_key, abc.Iterable)
and any(isinstance(x, slice) for x in casted_key)
):
raise InvalidIndexError(key)
> raise KeyError(key) from err
E KeyError: 'metric.training_mae'

.nox/system-3-10/lib/python3.10/site-packages/pandas/core/indexes/base.py:3812: KeyError
------------------------------ Captured log setup ------------------------------
INFO google.cloud.aiplatform.tensorboard.tensorboard_resource:base.py:85 Creating Tensorboard
INFO google.cloud.aiplatform.tensorboard.tensorboard_resource:base.py:88 Create Tensorboard backing LRO: projects/580378083368/locations/us-central1/tensorboards/4541400837133434880/operations/7542184020289781760
INFO google.cloud.aiplatform.tensorboard.tensorboard_resource:base.py:113 Tensorboard created. Resource name: projects/580378083368/locations/us-central1/tensorboards/4541400837133434880
INFO google.cloud.aiplatform.tensorboard.tensorboard_resource:base.py:114 To use this Tensorboard in another session:
INFO google.cloud.aiplatform.tensorboard.tensorboard_resource:base.py:115 tb = aiplatform.Tensorboard('projects/580378083368/locations/us-central1/tensorboards/4541400837133434880')
----------------------------- Captured stdout call -----------------------------


------------------------------ Captured log call -------------------------------
INFO google.cloud.aiplatform.metadata.experiment_resources:experiment_resources.py:797 Associating projects/580378083368/locations/us-central1/metadataStores/default/contexts/tmpvrtxsdk-e2e--c7dab4a0-de39-4dbb-8a3f-11b958112f21-sklearn-2024-11-19-23-14-34-fce04 to Experiment: tmpvrtxsdk-e2e--c7dab4a0-de39-4dbb-8a3f-11b958112f21
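Editor's note: the failure above is a raw pandas `KeyError` because the autologged column `metric.training_mae` is absent from the experiment DataFrame. A minimal sketch of a guarded version of the failing assertion (the helper name `assert_metric_positive` is hypothetical, not part of the SDK or the test suite) that surfaces the missing column as a readable assertion instead:

```python
import pandas as pd


def assert_metric_positive(df: pd.DataFrame, column: str = "metric.training_mae") -> None:
    # Check membership first so a missing autologged metric fails with a
    # clear message rather than a pandas KeyError deep in Index.get_loc.
    assert column in df.columns, (
        f"autologged column missing: {column!r}; available: {list(df.columns)}"
    )
    assert df[column].iloc[0] > 0, f"{column} should be positive"
```

This reproduces the same check as `assert experiment_df_scikit["metric.training_mae"][0] > 0`, but distinguishes "metric never logged" from "metric logged with a bad value".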
___________ TestPredictionCpr.test_build_cpr_model_upload_and_deploy ___________
[gw13] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python

self = <...>
shared_state = {}
caplog = <_pytest.logging.LogCaptureFixture object at 0x7f68942e5cc0>

def test_build_cpr_model_upload_and_deploy(self, shared_state, caplog):
"""Creates a CPR model from custom predictor, uploads it and deploys."""

caplog.set_level(logging.INFO)

aiplatform.init(project=e2e_base._PROJECT, location=e2e_base._LOCATION)

local_model = LocalModel.build_cpr_model(
_USER_CODE_DIR,
_IMAGE_URI,
predictor=SklearnPredictor,
requirements_path=os.path.join(_USER_CODE_DIR, _REQUIREMENTS_FILE),
)

with local_model.deploy_to_local_endpoint(
artifact_uri=_LOCAL_MODEL_DIR,
) as local_endpoint:
local_predict_response = local_endpoint.predict(
request=f'{{"instances": {_PREDICTION_INPUT}}}',
headers={"Content-Type": "application/json"},
)
assert len(json.loads(local_predict_response.content)["predictions"]) == 1

interactive_local_endpoint = local_model.deploy_to_local_endpoint(
artifact_uri=_LOCAL_MODEL_DIR,
)
interactive_local_endpoint.serve()
interactive_local_predict_response = interactive_local_endpoint.predict(
request=f'{{"instances": {_PREDICTION_INPUT}}}',
headers={"Content-Type": "application/json"},
)
interactive_local_endpoint.stop()
assert (
len(json.loads(interactive_local_predict_response.content)["predictions"])
== 1
)

# Configure docker.
logging.info(
subprocess.run(["gcloud", "auth", "configure-docker"], capture_output=True)
)

> local_model.push_image()

tests/system/aiplatform/test_prediction_cpr.py:94:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/prediction/local_model.py:612: in push_image
errors.raise_docker_error_with_command(command, return_code)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

command = ['docker', 'push', 'gcr.io/ucaip-sample-tests/prediction-cpr/sklearn:20241119_231419']
return_code = 1

def raise_docker_error_with_command(command: List[str], return_code: int) -> NoReturn:
"""Raises DockerError with the given command and return code.

Args:
command (List(str)):
Required. The docker command that fails.
return_code (int):
Required. The return code from the command.

Raises:
DockerError which error message populated by the given command and return code.
"""
error_msg = textwrap.dedent(
"""
Docker failed with error code {code}.
Command: {cmd}
""".format(
code=return_code, cmd=" ".join(command)
)
)
> raise DockerError(error_msg, command, return_code)
E google.cloud.aiplatform.docker_utils.errors.DockerError: ('\nDocker failed with error code 1.\nCommand: docker push gcr.io/ucaip-sample-tests/prediction-cpr/sklearn:20241119_231419\n', ['docker', 'push', 'gcr.io/ucaip-sample-tests/prediction-cpr/sklearn:20241119_231419'], 1)

google/cloud/aiplatform/docker_utils/errors.py:60: DockerError
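Editor's note: the `raise_docker_error_with_command` helper shown in the traceback can be exercised standalone. A self-contained sketch, assuming only what the traceback displays (the `DockerError` constructor signature here is inferred from the exception repr above, not copied from the SDK source):

```python
import textwrap
from typing import List, NoReturn


class DockerError(Exception):
    """Carries the failing docker command and its return code."""

    def __init__(self, message: str, cmd: List[str], exit_code: int):
        super().__init__(message)
        self.cmd = cmd
        self.exit_code = exit_code


def raise_docker_error_with_command(command: List[str], return_code: int) -> NoReturn:
    # Format the message exactly as in the log: code and full command line.
    error_msg = textwrap.dedent(
        """
        Docker failed with error code {code}.
        Command: {cmd}
        """.format(code=return_code, cmd=" ".join(command))
    )
    raise DockerError(error_msg, command, return_code)
```

In this run the push failed with return code 1 on `docker push gcr.io/ucaip-sample-tests/prediction-cpr/sklearn:20241119_231419`, which typically indicates a registry authentication or permission problem rather than a build failure (the image built successfully in the captured log below).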
------------------------------ Captured log call -------------------------------
INFO google.cloud.aiplatform.docker_utils.build:build.py:531 Running command: docker build -t gcr.io/ucaip-sample-tests/prediction-cpr/sklearn:20241119_231419 --rm -f- /tmpfs/src/github/python-aiplatform/tests/system/aiplatform/test_resources/cpr_user_code
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 DEPRECATED: The legacy builder is deprecated and will be removed in a future release.

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Install the buildx component to build images with BuildKit:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 https://docs.docker.com/go/buildx/

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Sending build context to Docker daemon 11.31kB
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 1/14 : FROM python:3.10

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 3.10: Pulling from library/python

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 b2b31b28ee3c: Pulling fs layer

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 c3cc7b6f0473: Pulling fs layer

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 2112e5e7c3ff: Pulling fs layer

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 af247aac0764: Pulling fs layer

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 920ce5d9169b: Pulling fs layer

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 b54a94f4cba6: Pulling fs layer

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 e06c8c5ca725: Pulling fs layer

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 af247aac0764: Waiting

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 920ce5d9169b: Waiting

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 b54a94f4cba6: Waiting

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 e06c8c5ca725: Waiting

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 c3cc7b6f0473: Verifying Checksum

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 c3cc7b6f0473: Download complete

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 b2b31b28ee3c: Verifying Checksum

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 b2b31b28ee3c: Download complete

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 2112e5e7c3ff: Verifying Checksum

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 2112e5e7c3ff: Download complete

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 920ce5d9169b: Verifying Checksum

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 920ce5d9169b: Download complete

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 e06c8c5ca725: Verifying Checksum

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 e06c8c5ca725: Download complete

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 b54a94f4cba6: Verifying Checksum

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 b54a94f4cba6: Download complete

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 af247aac0764: Verifying Checksum

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 af247aac0764: Download complete

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 b2b31b28ee3c: Pull complete

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 c3cc7b6f0473: Pull complete

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 2112e5e7c3ff: Pull complete

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 af247aac0764: Pull complete

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 920ce5d9169b: Pull complete

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 b54a94f4cba6: Pull complete

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 e06c8c5ca725: Pull complete

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Digest: sha256:941b0bfddbf17d809fd1f457acbf55dfca014e3e0e3d592b1c9070491681bc02

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Status: Downloaded newer image for python:3.10

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 0a9be5c27d86

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 2/14 : ENV PYTHONDONTWRITEBYTECODE=1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in a0e227fc1410

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container a0e227fc1410

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 265c3983eaa4

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 3/14 : EXPOSE 8080

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in f82b1a7f11b3

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container f82b1a7f11b3

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 25fa9aa471df

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 4/14 : ENTRYPOINT ["python", "-m", "google.cloud.aiplatform.prediction.model_server"]

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in 75595ac9c56e

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container 75595ac9c56e

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 245febd7e670

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 5/14 : RUN mkdir -m 777 -p /usr/app /home

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in f80beb524352

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container f80beb524352

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 47aeac5aa832

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 6/14 : WORKDIR /usr/app

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in 172d75229b82

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container 172d75229b82

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 3bc10ab53340

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 7/14 : ENV HOME=/home

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in 63571cda8417

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container 63571cda8417

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 5f26c03f3f71

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 8/14 : RUN pip install --no-cache-dir --force-reinstall 'google-cloud-aiplatform[prediction]>=1.27.0'

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in e0eabf991335

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-aiplatform[prediction]>=1.27.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_aiplatform-1.73.0-py2.py3-none-any.whl (6.3 MB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.3/6.3 MB 25.1 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-auth<3.0.0dev,>=2.14.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_auth-2.36.0-py2.py3-none-any.whl (209 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 209.5/209.5 kB 185.4 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting docstring-parser<1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading docstring_parser-0.16-py3-none-any.whl (36 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3.0.0dev,>=1.34.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_api_core-2.23.0-py3-none-any.whl (156 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 156.6/156.6 kB 185.9 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting protobuf!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5,<6.0.0dev,>=3.20.2

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading protobuf-5.28.3-cp38-abi3-manylinux2014_x86_64.whl (316 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 316.6/316.6 kB 124.5 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pydantic<3

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading pydantic-2.9.2-py3-none-any.whl (434 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 434.9/434.9 kB 65.8 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting shapely<3.0.0dev

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading shapely-2.0.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.5 MB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.5/2.5 MB 57.6 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-storage<3.0.0dev,>=1.32.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_storage-2.18.2-py2.py3-none-any.whl (130 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 130.5/130.5 kB 190.6 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-bigquery!=3.20.0,<4.0.0dev,>=1.15.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_bigquery-3.27.0-py2.py3-none-any.whl (240 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 240.1/240.1 kB 216.3 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-resource-manager<3.0.0dev,>=1.3.3

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_resource_manager-1.13.1-py2.py3-none-any.whl (358 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 358.6/358.6 kB 178.3 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting proto-plus<2.0.0dev,>=1.22.3

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading proto_plus-1.25.0-py3-none-any.whl (50 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 50.1/50.1 kB 167.2 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting packaging>=14.3

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading packaging-24.2-py3-none-any.whl (65 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 65.5/65.5 kB 176.4 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting starlette>=0.17.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading starlette-0.41.3-py3-none-any.whl (73 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 73.2/73.2 kB 185.8 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting httpx<0.25.0,>=0.23.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading httpx-0.24.1-py3-none-any.whl (75 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 75.4/75.4 kB 187.0 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting docker>=5.0.3

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading docker-7.1.0-py3-none-any.whl (147 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 147.8/147.8 kB 200.5 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting uvicorn[standard]>=0.16.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading uvicorn-0.32.0-py3-none-any.whl (63 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 63.7/63.7 kB 181.7 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting fastapi<=0.114.0,>=0.71.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading fastapi-0.114.0-py3-none-any.whl (94 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 94.0/94.0 kB 181.4 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting urllib3>=1.26.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading urllib3-2.2.3-py3-none-any.whl (126 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 126.3/126.3 kB 197.7 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting requests>=2.26.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading requests-2.32.3-py3-none-any.whl (64 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 64.9/64.9 kB 197.8 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting typing-extensions>=4.8.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading typing_extensions-4.12.2-py3-none-any.whl (37 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting starlette>=0.17.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading starlette-0.38.6-py3-none-any.whl (71 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 71.5/71.5 kB 186.6 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting googleapis-common-protos<2.0.dev0,>=1.56.2

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading googleapis_common_protos-1.66.0-py2.py3-none-any.whl (221 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 221.7/221.7 kB 224.8 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting grpcio<2.0dev,>=1.33.2

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading grpcio-1.68.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (5.9 MB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.9/5.9 MB 85.1 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting grpcio-status<2.0.dev0,>=1.33.2

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading grpcio_status-1.68.0-py3-none-any.whl (14 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting rsa<5,>=3.1.4

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading rsa-4.9-py3-none-any.whl (34 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pyasn1-modules>=0.2.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading pyasn1_modules-0.4.1-py3-none-any.whl (181 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 181.5/181.5 kB 199.2 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting cachetools<6.0,>=2.0.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading cachetools-5.5.0-py3-none-any.whl (9.5 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting python-dateutil<3.0dev,>=2.7.3

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 229.9/229.9 kB 212.4 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-resumable-media<3.0dev,>=2.0.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_resumable_media-2.7.2-py2.py3-none-any.whl (81 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 81.3/81.3 kB 174.8 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-core<3.0.0dev,>=2.4.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_core-2.4.1-py2.py3-none-any.whl (29 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting grpc-google-iam-v1<1.0.0dev,>=0.12.4

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading grpc_google_iam_v1-0.13.1-py2.py3-none-any.whl (24 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-crc32c<2.0dev,>=1.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_crc32c-1.6.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (37 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting httpcore<0.18.0,>=0.15.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading httpcore-0.17.3-py3-none-any.whl (74 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 74.5/74.5 kB 187.1 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting idna

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading idna-3.10-py3-none-any.whl (70 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 70.4/70.4 kB 152.5 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting sniffio

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading sniffio-1.3.1-py3-none-any.whl (10 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting certifi

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading certifi-2024.8.30-py3-none-any.whl (167 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 167.3/167.3 kB 206.3 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pydantic-core==2.23.4

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading pydantic_core-2.23.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.1 MB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 116.0 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting annotated-types>=0.6.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading annotated_types-0.7.0-py3-none-any.whl (13 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting numpy<3,>=1.14

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading numpy-2.1.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.3 MB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 16.3/16.3 MB 176.4 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting anyio<5,>=3.4.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading anyio-4.6.2.post1-py3-none-any.whl (90 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 90.4/90.4 kB 192.5 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting click>=7.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading click-8.1.7-py3-none-any.whl (97 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 97.9/97.9 kB 157.6 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting h11>=0.8

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading h11-0.14.0-py3-none-any.whl (58 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 58.3/58.3 kB 183.1 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting uvloop!=0.15.0,!=0.15.1,>=0.14.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading uvloop-0.21.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.8 MB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.8/3.8 MB 144.2 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting python-dotenv>=0.13

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading python_dotenv-1.0.1-py3-none-any.whl (19 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting httptools>=0.5.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading httptools-0.6.4-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (442 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 442.1/442.1 kB 173.3 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting watchfiles>=0.13

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading watchfiles-0.24.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (425 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 425.7/425.7 kB 225.4 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting websockets>=10.4

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading websockets-14.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (168 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 168.2/168.2 kB 195.8 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pyyaml>=5.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading PyYAML-6.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (751 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 751.2/751.2 kB 227.1 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting exceptiongroup>=1.0.2

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading exceptiongroup-1.2.2-py3-none-any.whl (16 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pyasn1<0.7.0,>=0.4.6

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading pyasn1-0.6.1-py3-none-any.whl (83 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 83.1/83.1 kB 183.8 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting six>=1.5

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting charset-normalizer<4,>=2

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading charset_normalizer-3.4.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (144 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 144.8/144.8 kB 184.6 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Installing collected packages: websockets, uvloop, urllib3, typing-extensions, sniffio, six, pyyaml, python-dotenv, pyasn1, protobuf, packaging, numpy, idna, httptools, h11, grpcio, google-crc32c, exceptiongroup, docstring-parser, click, charset-normalizer, certifi, cachetools, annotated-types, uvicorn, shapely, rsa, requests, python-dateutil, pydantic-core, pyasn1-modules, proto-plus, googleapis-common-protos, google-resumable-media, anyio, watchfiles, starlette, pydantic, httpcore, grpcio-status, google-auth, docker, httpx, grpc-google-iam-v1, google-api-core, fastapi, google-cloud-core, google-cloud-storage, google-cloud-resource-manager, google-cloud-bigquery, google-cloud-aiplatform

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully installed annotated-types-0.7.0 anyio-4.6.2.post1 cachetools-5.5.0 certifi-2024.8.30 charset-normalizer-3.4.0 click-8.1.7 docker-7.1.0 docstring-parser-0.16 exceptiongroup-1.2.2 fastapi-0.114.0 google-api-core-2.23.0 google-auth-2.36.0 google-cloud-aiplatform-1.73.0 google-cloud-bigquery-3.27.0 google-cloud-core-2.4.1 google-cloud-resource-manager-1.13.1 google-cloud-storage-2.18.2 google-crc32c-1.6.0 google-resumable-media-2.7.2 googleapis-common-protos-1.66.0 grpc-google-iam-v1-0.13.1 grpcio-1.68.0 grpcio-status-1.68.0 h11-0.14.0 httpcore-0.17.3 httptools-0.6.4 httpx-0.24.1 idna-3.10 numpy-2.1.3 packaging-24.2 proto-plus-1.25.0 protobuf-5.28.3 pyasn1-0.6.1 pyasn1-modules-0.4.1 pydantic-2.9.2 pydantic-core-2.23.4 python-dateutil-2.9.0.post0 python-dotenv-1.0.1 pyyaml-6.0.2 requests-2.32.3 rsa-4.9 shapely-2.0.6 six-1.16.0 sniffio-1.3.1 starlette-0.38.6 typing-extensions-4.12.2 urllib3-2.2.3 uvicorn-0.32.0 uvloop-0.21.0 watchfiles-0.24.0 websockets-14.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 [notice] A new release of pip is available: 23.0.1 -> 24.3.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 [notice] To update, run: pip install --upgrade pip

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container e0eabf991335

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> eafa47d1fe17

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 9/14 : ENV HANDLER_MODULE=google.cloud.aiplatform.prediction.handler

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in 4ac368cc46b8

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container 4ac368cc46b8

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 7bd899e73282

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 10/14 : ENV HANDLER_CLASS=PredictionHandler

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in f25371761366

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container f25371761366

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> c1a05d590570

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 11/14 : ENV PREDICTOR_MODULE=predictor

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in 78f2ca26ec3d

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container 78f2ca26ec3d

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> b3506179ab09

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 12/14 : ENV PREDICTOR_CLASS=SklearnPredictor

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in 9baf64a3625e

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Removing intermediate container 9baf64a3625e

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> fa3a5fd68ead

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 13/14 : COPY [".", "."]

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 6e50e387706a

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Step 14/14 : RUN pip install --no-cache-dir --force-reinstall -r requirements.txt

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> Running in 243a302c23a1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting scikit-learn

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading scikit_learn-1.5.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (13.3 MB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 13.3/13.3 MB 50.7 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-aiplatform[prediction]

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_aiplatform-1.73.0-py2.py3-none-any.whl (6.3 MB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.3/6.3 MB 108.0 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting scipy>=1.6.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading scipy-1.14.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (41.2 MB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 41.2/41.2 MB 165.1 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting numpy>=1.19.5

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading numpy-2.1.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.3 MB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 16.3/16.3 MB 129.6 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting joblib>=1.2.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading joblib-1.4.2-py3-none-any.whl (301 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 301.8/301.8 kB 206.6 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting threadpoolctl>=3.1.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading threadpoolctl-3.5.0-py3-none-any.whl (18 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3.0.0dev,>=1.34.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_api_core-2.23.0-py3-none-any.whl (156 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 156.6/156.6 kB 214.1 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting proto-plus<2.0.0dev,>=1.22.3

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading proto_plus-1.25.0-py3-none-any.whl (50 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 50.1/50.1 kB 177.5 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-bigquery!=3.20.0,<4.0.0dev,>=1.15.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_bigquery-3.27.0-py2.py3-none-any.whl (240 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 240.1/240.1 kB 188.9 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting shapely<3.0.0dev

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading shapely-2.0.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.5 MB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.5/2.5 MB 161.4 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-auth<3.0.0dev,>=2.14.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_auth-2.36.0-py2.py3-none-any.whl (209 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 209.5/209.5 kB 204.1 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting packaging>=14.3

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading packaging-24.2-py3-none-any.whl (65 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 65.5/65.5 kB 159.2 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-resource-manager<3.0.0dev,>=1.3.3

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_resource_manager-1.13.1-py2.py3-none-any.whl (358 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 358.6/358.6 kB 187.8 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting protobuf!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5,<6.0.0dev,>=3.20.2

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading protobuf-5.28.3-cp38-abi3-manylinux2014_x86_64.whl (316 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 316.6/316.6 kB 211.3 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pydantic<3

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading pydantic-2.9.2-py3-none-any.whl (434 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 434.9/434.9 kB 51.2 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting docstring-parser<1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading docstring_parser-0.16-py3-none-any.whl (36 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-storage<3.0.0dev,>=1.32.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_storage-2.18.2-py2.py3-none-any.whl (130 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 130.5/130.5 kB 184.7 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting docker>=5.0.3

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading docker-7.1.0-py3-none-any.whl (147 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 147.8/147.8 kB 196.1 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting fastapi<=0.114.0,>=0.71.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading fastapi-0.114.0-py3-none-any.whl (94 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 94.0/94.0 kB 182.6 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting httpx<0.25.0,>=0.23.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading httpx-0.24.1-py3-none-any.whl (75 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 75.4/75.4 kB 182.1 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting uvicorn[standard]>=0.16.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading uvicorn-0.32.0-py3-none-any.whl (63 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 63.7/63.7 kB 170.5 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting starlette>=0.17.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading starlette-0.41.3-py3-none-any.whl (73 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 73.2/73.2 kB 150.5 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting requests>=2.26.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading requests-2.32.3-py3-none-any.whl (64 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 64.9/64.9 kB 146.8 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting urllib3>=1.26.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading urllib3-2.2.3-py3-none-any.whl (126 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 126.3/126.3 kB 202.3 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting starlette>=0.17.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading starlette-0.38.6-py3-none-any.whl (71 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 71.5/71.5 kB 153.9 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting typing-extensions>=4.8.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading typing_extensions-4.12.2-py3-none-any.whl (37 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting googleapis-common-protos<2.0.dev0,>=1.56.2

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading googleapis_common_protos-1.66.0-py2.py3-none-any.whl (221 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 221.7/221.7 kB 209.9 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting grpcio<2.0dev,>=1.33.2

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading grpcio-1.68.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (5.9 MB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.9/5.9 MB 169.9 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting grpcio-status<2.0.dev0,>=1.33.2

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading grpcio_status-1.68.0-py3-none-any.whl (14 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pyasn1-modules>=0.2.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading pyasn1_modules-0.4.1-py3-none-any.whl (181 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 181.5/181.5 kB 206.8 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting rsa<5,>=3.1.4

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading rsa-4.9-py3-none-any.whl (34 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting cachetools<6.0,>=2.0.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading cachetools-5.5.0-py3-none-any.whl (9.5 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-cloud-core<3.0.0dev,>=2.4.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_cloud_core-2.4.1-py2.py3-none-any.whl (29 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-resumable-media<3.0dev,>=2.0.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_resumable_media-2.7.2-py2.py3-none-any.whl (81 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 81.3/81.3 kB 186.6 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting python-dateutil<3.0dev,>=2.7.3

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 229.9/229.9 kB 210.9 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting grpc-google-iam-v1<1.0.0dev,>=0.12.4

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading grpc_google_iam_v1-0.13.1-py2.py3-none-any.whl (24 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting google-crc32c<2.0dev,>=1.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading google_crc32c-1.6.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (37 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting idna

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading idna-3.10-py3-none-any.whl (70 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 70.4/70.4 kB 132.8 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting certifi

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading certifi-2024.8.30-py3-none-any.whl (167 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 167.3/167.3 kB 206.9 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting sniffio

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading sniffio-1.3.1-py3-none-any.whl (10 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting httpcore<0.18.0,>=0.15.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading httpcore-0.17.3-py3-none-any.whl (74 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 74.5/74.5 kB 181.8 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting annotated-types>=0.6.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading annotated_types-0.7.0-py3-none-any.whl (13 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pydantic-core==2.23.4

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading pydantic_core-2.23.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.1 MB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 192.1 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting anyio<5,>=3.4.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading anyio-4.6.2.post1-py3-none-any.whl (90 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 90.4/90.4 kB 189.1 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting click>=7.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading click-8.1.7-py3-none-any.whl (97 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 97.9/97.9 kB 189.4 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting h11>=0.8

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading h11-0.14.0-py3-none-any.whl (58 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 58.3/58.3 kB 191.4 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting httptools>=0.5.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading httptools-0.6.4-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (442 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 442.1/442.1 kB 173.8 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting uvloop!=0.15.0,!=0.15.1,>=0.14.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading uvloop-0.21.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.8 MB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.8/3.8 MB 181.0 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting python-dotenv>=0.13

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading python_dotenv-1.0.1-py3-none-any.whl (19 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting websockets>=10.4

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading websockets-14.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (168 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 168.2/168.2 kB 211.1 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting watchfiles>=0.13

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading watchfiles-0.24.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (425 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 425.7/425.7 kB 217.3 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pyyaml>=5.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading PyYAML-6.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (751 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 751.2/751.2 kB 148.5 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting exceptiongroup>=1.0.2

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading exceptiongroup-1.2.2-py3-none-any.whl (16 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting pyasn1<0.7.0,>=0.4.6

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading pyasn1-0.6.1-py3-none-any.whl (83 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 83.1/83.1 kB 185.9 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting six>=1.5

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Collecting charset-normalizer<4,>=2

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Downloading charset_normalizer-3.4.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (144 kB)

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 144.8/144.8 kB 202.5 MB/s eta 0:00:00

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Installing collected packages: websockets, uvloop, urllib3, typing-extensions, threadpoolctl, sniffio, six, pyyaml, python-dotenv, pyasn1, protobuf, packaging, numpy, joblib, idna, httptools, h11, grpcio, google-crc32c, exceptiongroup, docstring-parser, click, charset-normalizer, certifi, cachetools, annotated-types, uvicorn, shapely, scipy, rsa, requests, python-dateutil, pydantic-core, pyasn1-modules, proto-plus, googleapis-common-protos, google-resumable-media, anyio, watchfiles, starlette, scikit-learn, pydantic, httpcore, grpcio-status, google-auth, docker, httpx, grpc-google-iam-v1, google-api-core, fastapi, google-cloud-core, google-cloud-storage, google-cloud-resource-manager, google-cloud-bigquery, google-cloud-aiplatform

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: websockets

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: websockets 14.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling websockets-14.1:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled websockets-14.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: uvloop

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: uvloop 0.21.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling uvloop-0.21.0:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled uvloop-0.21.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: urllib3

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: urllib3 2.2.3

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling urllib3-2.2.3:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled urllib3-2.2.3

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: typing-extensions

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: typing_extensions 4.12.2

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling typing_extensions-4.12.2:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled typing_extensions-4.12.2

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: sniffio

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: sniffio 1.3.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling sniffio-1.3.1:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled sniffio-1.3.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: six

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: six 1.16.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling six-1.16.0:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled six-1.16.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: pyyaml

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: PyYAML 6.0.2

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling PyYAML-6.0.2:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled PyYAML-6.0.2

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: python-dotenv

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: python-dotenv 1.0.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling python-dotenv-1.0.1:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled python-dotenv-1.0.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: pyasn1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: pyasn1 0.6.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling pyasn1-0.6.1:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled pyasn1-0.6.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: protobuf

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: protobuf 5.28.3

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling protobuf-5.28.3:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled protobuf-5.28.3

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: packaging

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: packaging 24.2

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling packaging-24.2:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled packaging-24.2

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: numpy

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: numpy 2.1.3

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling numpy-2.1.3:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled numpy-2.1.3

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: idna

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: idna 3.10

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling idna-3.10:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled idna-3.10

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: httptools

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: httptools 0.6.4

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling httptools-0.6.4:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled httptools-0.6.4

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: h11

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: h11 0.14.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling h11-0.14.0:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled h11-0.14.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: grpcio

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: grpcio 1.68.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling grpcio-1.68.0:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled grpcio-1.68.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-crc32c

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-crc32c 1.6.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-crc32c-1.6.0:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-crc32c-1.6.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: exceptiongroup

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: exceptiongroup 1.2.2

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling exceptiongroup-1.2.2:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled exceptiongroup-1.2.2

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: docstring-parser

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: docstring_parser 0.16

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling docstring_parser-0.16:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled docstring_parser-0.16

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: click

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: click 8.1.7

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling click-8.1.7:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled click-8.1.7

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: charset-normalizer

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: charset-normalizer 3.4.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling charset-normalizer-3.4.0:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled charset-normalizer-3.4.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: certifi

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: certifi 2024.8.30

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling certifi-2024.8.30:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled certifi-2024.8.30

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: cachetools

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: cachetools 5.5.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling cachetools-5.5.0:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled cachetools-5.5.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: annotated-types

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: annotated-types 0.7.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling annotated-types-0.7.0:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled annotated-types-0.7.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: uvicorn

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: uvicorn 0.32.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling uvicorn-0.32.0:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled uvicorn-0.32.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: shapely

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: shapely 2.0.6

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling shapely-2.0.6:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled shapely-2.0.6

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: rsa

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: rsa 4.9

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling rsa-4.9:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled rsa-4.9

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: requests

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: requests 2.32.3

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling requests-2.32.3:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled requests-2.32.3

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: python-dateutil

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: python-dateutil 2.9.0.post0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling python-dateutil-2.9.0.post0:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled python-dateutil-2.9.0.post0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: pydantic-core

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: pydantic_core 2.23.4

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling pydantic_core-2.23.4:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled pydantic_core-2.23.4

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: pyasn1-modules

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: pyasn1_modules 0.4.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling pyasn1_modules-0.4.1:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled pyasn1_modules-0.4.1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: proto-plus

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: proto-plus 1.25.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling proto-plus-1.25.0:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled proto-plus-1.25.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: googleapis-common-protos

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: googleapis-common-protos 1.66.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling googleapis-common-protos-1.66.0:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled googleapis-common-protos-1.66.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-resumable-media

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-resumable-media 2.7.2

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-resumable-media-2.7.2:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-resumable-media-2.7.2

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: anyio

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: anyio 4.6.2.post1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling anyio-4.6.2.post1:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled anyio-4.6.2.post1

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: watchfiles

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: watchfiles 0.24.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling watchfiles-0.24.0:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled watchfiles-0.24.0

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: starlette

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: starlette 0.38.6

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling starlette-0.38.6:

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled starlette-0.38.6

INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: pydantic
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: pydantic 2.9.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling pydantic-2.9.2:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled pydantic-2.9.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: httpcore
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: httpcore 0.17.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling httpcore-0.17.3:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled httpcore-0.17.3
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: grpcio-status
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: grpcio-status 1.68.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling grpcio-status-1.68.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled grpcio-status-1.68.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-auth
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-auth 2.36.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-auth-2.36.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-auth-2.36.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: docker
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: docker 7.1.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling docker-7.1.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled docker-7.1.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: httpx
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: httpx 0.24.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling httpx-0.24.1:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled httpx-0.24.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: grpc-google-iam-v1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: grpc-google-iam-v1 0.13.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling grpc-google-iam-v1-0.13.1:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled grpc-google-iam-v1-0.13.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-api-core
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-api-core 2.23.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-api-core-2.23.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-api-core-2.23.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: fastapi
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: fastapi 0.114.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling fastapi-0.114.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled fastapi-0.114.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-cloud-core
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-cloud-core 2.4.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-cloud-core-2.4.1:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-cloud-core-2.4.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-cloud-storage
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-cloud-storage 2.18.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-cloud-storage-2.18.2:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-cloud-storage-2.18.2
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-cloud-resource-manager
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-cloud-resource-manager 1.13.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-cloud-resource-manager-1.13.1:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-cloud-resource-manager-1.13.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-cloud-bigquery
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-cloud-bigquery 3.27.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-cloud-bigquery-3.27.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-cloud-bigquery-3.27.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Attempting uninstall: google-cloud-aiplatform
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Found existing installation: google-cloud-aiplatform 1.73.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Uninstalling google-cloud-aiplatform-1.73.0:
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully uninstalled google-cloud-aiplatform-1.73.0
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully installed annotated-types-0.7.0 anyio-4.6.2.post1 cachetools-5.5.0 certifi-2024.8.30 charset-normalizer-3.4.0 click-8.1.7 docker-7.1.0 docstring-parser-0.16 exceptiongroup-1.2.2 fastapi-0.114.0 google-api-core-2.23.0 google-auth-2.36.0 google-cloud-aiplatform-1.73.0 google-cloud-bigquery-3.27.0 google-cloud-core-2.4.1 google-cloud-resource-manager-1.13.1 google-cloud-storage-2.18.2 google-crc32c-1.6.0 google-resumable-media-2.7.2 googleapis-common-protos-1.66.0 grpc-google-iam-v1-0.13.1 grpcio-1.68.0 grpcio-status-1.68.0 h11-0.14.0 httpcore-0.17.3 httptools-0.6.4 httpx-0.24.1 idna-3.10 joblib-1.4.2 numpy-2.1.3 packaging-24.2 proto-plus-1.25.0 protobuf-5.28.3 pyasn1-0.6.1 pyasn1-modules-0.4.1 pydantic-2.9.2 pydantic-core-2.23.4 python-dateutil-2.9.0.post0 python-dotenv-1.0.1 pyyaml-6.0.2 requests-2.32.3 rsa-4.9 scikit-learn-1.5.2 scipy-1.14.1 shapely-2.0.6 six-1.16.0 sniffio-1.3.1 starlette-0.38.6 threadpoolctl-3.5.0 typing-extensions-4.12.2 urllib3-2.2.3 uvicorn-0.32.0 uvloop-0.21.0 watchfiles-0.24.0 websockets-14.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 [notice] A new release of pip is available: 23.0.1 -> 24.3.1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 [notice] To update, run: pip install --upgrade pip
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60
Removing intermediate container 243a302c23a1
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ---> 294a1c04fb0d
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully built 294a1c04fb0d
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 Successfully tagged gcr.io/ucaip-sample-tests/prediction-cpr/sklearn:20241119_231419

INFO google.cloud.aiplatform.prediction.local_endpoint:local_endpoint.py:237 Got the project id from the global config: ucaip-sample-tests.
INFO google.cloud.aiplatform.prediction.local_endpoint:local_endpoint.py:237 Got the project id from the global config: ucaip-sample-tests.
INFO root:test_prediction_cpr.py:90 CompletedProcess(args=['gcloud', 'auth', 'configure-docker'], returncode=0, stdout=b'', stderr=b'Adding credentials for all GCR repositories.\nWARNING: A long list of credential helpers may cause delays running \'docker build\'. We recommend passing the registry name to configure only the registry you are using.\nAfter update, the following will be written to your Docker config file located \nat [/root/.docker/config.json]:\n {\n "credHelpers": {\n "gcr.io": "gcloud",\n "us.gcr.io": "gcloud",\n "eu.gcr.io": "gcloud",\n "asia.gcr.io": "gcloud",\n "staging-k8s.gcr.io": "gcloud",\n "marketplace.gcr.io": "gcloud"\n }\n}\n\nDo you want to continue (Y/n)? \nDocker configuration file updated.\n')
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 The push refers to repository [gcr.io/ucaip-sample-tests/prediction-cpr/sklearn]
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 741a0703f56a: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 153466653609: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 f20e36acc104: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 3f2146a291fb: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 ed9137022617: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 49adcbed4d27: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 611fa59a47aa: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 96d99c63b722: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 00547dd240c4: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 b6ca42156b9f: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 24b5ce0f1e07: Preparing
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 96d99c63b722: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 00547dd240c4: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 49adcbed4d27: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 611fa59a47aa: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 b6ca42156b9f: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 24b5ce0f1e07: Waiting
INFO google.cloud.aiplatform.docker_utils.local_util:local_util.py:60 error parsing HTTP 412 response body: invalid character 'C' looking for beginning of value: "Container Registry is deprecated and shutting down, please use the auto migration tool to migrate to Artifact Registry. For more details see: https://cloud.google.com/artifact-registry/docs/transition/auto-migrate-gcr-ar"
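Editor's note: the HTTP 412 above is Container Registry's shutdown notice; the push target must move to Artifact Registry. As a rough sketch of the rename involved, the helper below rewrites a gcr.io image reference into its gcr.io-hosted Artifact Registry form. The `*-docker.pkg.dev` hostnames are assumptions based on Google's migration guide, not something this log confirms.

```python
# Hypothetical sketch of the gcr.io -> Artifact Registry name mapping.
# Assumption: each gcr.io host maps to a regional *-docker.pkg.dev host,
# with the old hostname reused as the Artifact Registry repository name.
_GCR_TO_AR_HOST = {
    "gcr.io": "us-docker.pkg.dev",
    "us.gcr.io": "us-docker.pkg.dev",
    "eu.gcr.io": "europe-docker.pkg.dev",
    "asia.gcr.io": "asia-docker.pkg.dev",
}


def to_artifact_registry(image: str) -> str:
    """Rewrite a gcr.io image name to its gcr.io-hosted Artifact Registry form."""
    host, project, *rest = image.split("/")
    ar_host = _GCR_TO_AR_HOST[host]
    # The AR repository keeps the old gcr.io hostname as the repository segment.
    return "/".join([ar_host, project, host] + rest)
```

For the image tagged in this build, the sketch would yield `us-docker.pkg.dev/ucaip-sample-tests/gcr.io/prediction-cpr/sklearn:20241119_231419` as the retagged push target.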
_____ TestLanguageModels.test_code_chat_model_send_message_streaming[grpc] _____
[gw14] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python

self =
api_transport = 'grpc'

    @pytest.mark.parametrize("api_transport", ["grpc", "rest"])
    def test_code_chat_model_send_message_streaming(self, api_transport):
        aiplatform.init(
            project=e2e_base._PROJECT,
            location=e2e_base._LOCATION,
            api_transport=api_transport,
        )

        chat_model = language_models.CodeChatModel.from_pretrained("codechat-bison@001")
        chat = chat_model.start_chat()

        message1 = "Please help write a function to calculate the max of two numbers"
        for response in chat.send_message_streaming(message1):
>           assert response.text
E           AssertionError: assert ''
E           + where '' = MultiCandidateTextGenerationResponse(text='', _prediction_response=Prediction(predictions=[{'candidates': [{'content': '', 'author': '1'}], 'citationMetadata': [{'citations': None}], 'safetyAttributes': [{'blocked': False, 'scores': None, 'categories': None}]}], deployed_model_id='', metadata=None, model_version_id=None, model_resource_name=None, explanations=None), is_blocked=False, errors=(), safety_attributes={}, grounding_metadata=GroundingMetadata(citations=[], search_queries=[]), candidates=[TextGenerationResponse(text='', is_blocked=False, errors=(), safety_attributes={}, grounding_metadata=GroundingMetadata(citations=[], search_queries=[]))]).text

tests/system/aiplatform/test_language_models.py:559: AssertionError
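Editor's note: the assertion fails because the service streamed a chunk whose candidate text is empty (`text=''`, not blocked, no errors). If empty interim chunks are legitimate service behavior, the consumer side can filter them before asserting; a minimal, hypothetical helper (not part of the SDK) would look like:

```python
from typing import Any, Iterable, Iterator


def non_empty_chunks(stream: Iterable[Any]) -> Iterator[Any]:
    """Yield only streaming responses whose `.text` is non-empty.

    Works with any object exposing a `.text` attribute, such as the
    responses yielded by `chat.send_message_streaming(...)`.
    """
    for chunk in stream:
        if chunk.text:  # skip empty filler chunks like the one in this failure
            yield chunk
```

The test could then iterate `non_empty_chunks(chat.send_message_streaming(message1))` and assert only on chunks that carry text, while separately asserting that the concatenated text is non-empty.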
________ TestJobSubmissionDashboard.test_job_submission_dashboard[2.9] _________
[gw0] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python

self =
cluster_ray_version = '2.9'

    @pytest.mark.parametrize("cluster_ray_version", ["2.9", "2.33"])
    def test_job_submission_dashboard(self, cluster_ray_version):
        assert ray.__version__ == RAY_VERSION
        aiplatform.init(project=PROJECT_ID, location="us-central1")

        head_node_type = vertex_ray.Resources()
        worker_node_types = [vertex_ray.Resources()]

        timestamp = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")

        # Create cluster, get dashboard address
>       cluster_resource_name = vertex_ray.create_ray_cluster(
            head_node_type=head_node_type,
            worker_node_types=worker_node_types,
            cluster_name=f"ray-cluster-{timestamp}-test-job-submission-dashboard",
            ray_version=cluster_ray_version,
        )

tests/system/vertex_ray/test_job_submission_dashboard.py:49:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/vertex_ray/cluster_init.py:360: in create_ray_cluster
response = _gapic_utils.get_persistent_resource(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

persistent_resource_name = 'projects/580378083368/locations/us-central1/persistentResources/ray-cluster-2024-11-19-23-16-06-test-job-submission-dashboard'
tolerance = 1

def get_persistent_resource(
    persistent_resource_name: str, tolerance: Optional[int] = 0
):
    """Get persistent resource.

    Args:
        persistent_resource_name:
            "projects//locations//persistentResources/".
        tolerance: number of attemps to get persistent resource.

    Returns:
        aiplatform_v1.PersistentResource if state is RUNNING.

    Raises:
        ValueError: Invalid cluster resource name.
        RuntimeError: Service returns error.
        RuntimeError: Cluster resource state is STOPPING.
        RuntimeError: Cluster resource state is ERROR.
    """

    client = create_persistent_resource_client()
    request = GetPersistentResourceRequest(name=persistent_resource_name)

    # TODO(b/277117901): Add test cases for polling and error handling
    num_attempts = 0
    while True:
        try:
            response = client.get_persistent_resource(request)
        except exceptions.NotFound:
            response = None
            if num_attempts >= tolerance:
                raise ValueError(
                    "[Ray on Vertex AI]: Invalid cluster_resource_name (404 not found)."
                )
        if response:
            if response.error.message:
                logging.error("[Ray on Vertex AI]: %s" % response.error.message)
>               raise RuntimeError("[Ray on Vertex AI]: Cluster returned an error.")
E               RuntimeError: [Ray on Vertex AI]: Cluster returned an error.

google/cloud/aiplatform/vertex_ray/util/_gapic_utils.py:115: RuntimeError
----------------------------- Captured stdout call -----------------------------
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 1; sleeping for 0:02:30 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 2; sleeping for 0:01:54.750000 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 3; sleeping for 0:01:27.783750 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 4; sleeping for 0:01:07.154569 seconds
------------------------------ Captured log call -------------------------------
ERROR root:_gapic_utils.py:114 [Ray on Vertex AI]: An internal error occurred on your cluster. Please try recreating one in a few minutes. If you still experience errors, contact Cloud AI Platform.
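Editor's note: the captured stdout shows the poll intervals shrinking geometrically, each 0.765x the previous, starting from 150 seconds (0:02:30, 0:01:54.750000, 0:01:27.783750, 0:01:07.154569). A few lines reproduce that schedule; the 0.765 decay factor is inferred from this log, not read from the SDK source.

```python
def sleep_schedule(base: float = 150.0, factor: float = 0.765, attempts: int = 4):
    """Reproduce the decaying 'sleeping for ...' intervals seen above.

    base/factor are inferred from the captured stdout of this run.
    """
    return [base * factor**i for i in range(attempts)]
```

Four attempts at this schedule cover roughly seven minutes of waiting before the persistent resource reported the ERROR state.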
_ TestGenerativeModels.test_generate_content_function_calling[grpc-PROD_ENDPOINT] _
[gw8] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python

self =
api_endpoint_env_name = 'PROD_ENDPOINT'

    def test_generate_content_function_calling(self, api_endpoint_env_name):
        get_current_weather_func = generative_models.FunctionDeclaration(
            name="get_current_weather",
            description="Get the current weather in a given location",
            parameters=_REQUEST_FUNCTION_PARAMETER_SCHEMA_STRUCT,
        )

        weather_tool = generative_models.Tool(
            function_declarations=[get_current_weather_func],
        )

        model = generative_models.GenerativeModel(
            GEMINI_MODEL_NAME,
            # Specifying the tools once to avoid specifying them in every request
            tools=[weather_tool],
        )

        # Define the user's prompt in a Content object that we can reuse in model calls
        prompt = "What is the weather like in Boston?"
        user_prompt_content = generative_models.Content(
            role="user",
            parts=[
                generative_models.Part.from_text(prompt),
            ],
        )

        # Send the prompt and instruct the model to generate content using the Tool
        response = model.generate_content(
            user_prompt_content,
            generation_config={"temperature": 0},
            tools=[weather_tool],
        )
        response_function_call_content = response.candidates[0].content

        assert (
            response.candidates[0].content.parts[0].function_call.name
            == "get_current_weather"
        )

        assert response.candidates[0].function_calls[0].args["location"]
        assert len(response.candidates[0].function_calls) == 1
>       assert (
            response.candidates[0].function_calls[0]
            == response.candidates[0].content.parts[0].function_call
        )
E       assert name: "get_current_weather"\nargs {\n fields {\n key: "location"\n value {\n string_value: "Boston, MA"\n }\n }\n}\n == name: "get_current_weather"\nargs {\n fields {\n key: "location"\n value {\n string_value: "Boston, MA"\n }\n }\n}\n
E       + where name: "get_current_weather"\nargs {\n fields {\n key: "location"\n value {\n string_value: "Boston, MA"\n }\n }\n}\n = function_call {\n name: "get_current_weather"\n args {\n fields {\n key: "location"\n value {\n string_value: "Boston, MA"\n }\n }\n }\n}\n.function_call

tests/system/vertexai/test_generative_models.py:565: AssertionError
_ TestGenerativeModels.test_generate_content_function_calling[rest-PROD_ENDPOINT] _
[gw8] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python

self =
api_endpoint_env_name = 'PROD_ENDPOINT'

    def test_generate_content_function_calling(self, api_endpoint_env_name):
        get_current_weather_func = generative_models.FunctionDeclaration(
            name="get_current_weather",
            description="Get the current weather in a given location",
            parameters=_REQUEST_FUNCTION_PARAMETER_SCHEMA_STRUCT,
        )

        weather_tool = generative_models.Tool(
            function_declarations=[get_current_weather_func],
        )

        model = generative_models.GenerativeModel(
            GEMINI_MODEL_NAME,
            # Specifying the tools once to avoid specifying them in every request
            tools=[weather_tool],
        )

        # Define the user's prompt in a Content object that we can reuse in model calls
        prompt = "What is the weather like in Boston?"
        user_prompt_content = generative_models.Content(
            role="user",
            parts=[
                generative_models.Part.from_text(prompt),
            ],
        )

        # Send the prompt and instruct the model to generate content using the Tool
        response = model.generate_content(
            user_prompt_content,
            generation_config={"temperature": 0},
            tools=[weather_tool],
        )
        response_function_call_content = response.candidates[0].content

        assert (
            response.candidates[0].content.parts[0].function_call.name
            == "get_current_weather"
        )

        assert response.candidates[0].function_calls[0].args["location"]
        assert len(response.candidates[0].function_calls) == 1
>       assert (
            response.candidates[0].function_calls[0]
            == response.candidates[0].content.parts[0].function_call
        )
E       assert name: "get_current_weather"\nargs {\n fields {\n key: "location"\n value {\n string_value: "Boston, MA"\n }\n }\n}\n == name: "get_current_weather"\nargs {\n fields {\n key: "location"\n value {\n string_value: "Boston, MA"\n }\n }\n}\n
E       + where name: "get_current_weather"\nargs {\n fields {\n key: "location"\n value {\n string_value: "Boston, MA"\n }\n }\n}\n = function_call {\n name: "get_current_weather"\n args {\n fields {\n key: "location"\n value {\n string_value: "Boston, MA"\n }\n }\n }\n}\n.function_call

tests/system/vertexai/test_generative_models.py:565: AssertionError
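Editor's note: in both variants above the two operands print identically yet compare unequal. With protocol buffers this usually means the two values are different wrapper types around the same payload (message equality is type-strict), so comparing a type-normalized view of the fields is more robust than `==` on the objects. A toy illustration of that failure mode, deliberately using plain classes rather than the real proto types:

```python
class RawFunctionCall:
    """Stand-in for a raw protobuf message (hypothetical, for illustration)."""

    def __init__(self, name, args):
        self.name, self.args = name, args

    def __repr__(self):
        return f'name: "{self.name}" args: {self.args}'


class WrappedFunctionCall(RawFunctionCall):
    """Stand-in for a wrapper type carrying the same data."""

    def __eq__(self, other):
        # Type-strict equality, as generated message classes often enforce.
        return type(self) is type(other) and (self.name, self.args) == (
            other.name,
            other.args,
        )

    __hash__ = None


a = WrappedFunctionCall("get_current_weather", {"location": "Boston, MA"})
b = RawFunctionCall("get_current_weather", {"location": "Boston, MA"})

assert repr(a) == repr(b)  # identical printout, as in the pytest diff...
assert a != b              # ...yet not equal, mirroring the assertion failure
# Comparing a normalized view of the fields sidesteps the wrapper mismatch:
assert (a.name, a.args) == (b.name, b.args)
```

The test could similarly compare `function_call.name` and `dict(function_call.args)` on both sides instead of relying on object equality across wrapper boundaries.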
_____________ TestClusterManagement.test_cluster_management[2.33] ______________
[gw10] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python

self =
cluster_ray_version = '2.33'

    @pytest.mark.parametrize("cluster_ray_version", ["2.9", "2.33"])
    def test_cluster_management(self, cluster_ray_version):
        assert ray.__version__ == RAY_VERSION
        aiplatform.init(project=PROJECT_ID, location="us-central1")

        # CPU default cluster
        head_node_type = vertex_ray.Resources()
        worker_node_types = [vertex_ray.Resources()]

        timestamp = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")

>       cluster_resource_name = vertex_ray.create_ray_cluster(
            head_node_type=head_node_type,
            worker_node_types=worker_node_types,
            cluster_name=f"ray-cluster-{timestamp}-test-cluster-management",
            ray_version=cluster_ray_version,
        )

tests/system/vertex_ray/test_cluster_management.py:45:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/vertex_ray/cluster_init.py:360: in create_ray_cluster
response = _gapic_utils.get_persistent_resource(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

persistent_resource_name = 'projects/580378083368/locations/us-central1/persistentResources/ray-cluster-2024-11-19-23-33-50-test-cluster-management'
tolerance = 1

def get_persistent_resource(
    persistent_resource_name: str, tolerance: Optional[int] = 0
):
    """Get persistent resource.

    Args:
        persistent_resource_name:
            "projects//locations//persistentResources/".
        tolerance: number of attemps to get persistent resource.

    Returns:
        aiplatform_v1.PersistentResource if state is RUNNING.

    Raises:
        ValueError: Invalid cluster resource name.
        RuntimeError: Service returns error.
        RuntimeError: Cluster resource state is STOPPING.
        RuntimeError: Cluster resource state is ERROR.
    """

    client = create_persistent_resource_client()
    request = GetPersistentResourceRequest(name=persistent_resource_name)

    # TODO(b/277117901): Add test cases for polling and error handling
    num_attempts = 0
    while True:
        try:
            response = client.get_persistent_resource(request)
        except exceptions.NotFound:
            response = None
            if num_attempts >= tolerance:
                raise ValueError(
                    "[Ray on Vertex AI]: Invalid cluster_resource_name (404 not found)."
                )
        if response:
            if response.error.message:
                logging.error("[Ray on Vertex AI]: %s" % response.error.message)
>               raise RuntimeError("[Ray on Vertex AI]: Cluster returned an error.")
E               RuntimeError: [Ray on Vertex AI]: Cluster returned an error.

google/cloud/aiplatform/vertex_ray/util/_gapic_utils.py:115: RuntimeError
----------------------------- Captured stdout call -----------------------------
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 1; sleeping for 0:02:30 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 2; sleeping for 0:01:54.750000 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 3; sleeping for 0:01:27.783750 seconds
[Ray on Vertex AI]: Cluster State = State.PROVISIONING
Waiting for cluster provisioning; attempt 4; sleeping for 0:01:07.154569 seconds
------------------------------ Captured log call -------------------------------
ERROR root:_gapic_utils.py:114 [Ray on Vertex AI]: An internal error occurred on your cluster. Please try recreating one in a few minutes. If you still experience errors, contact Cloud AI Platform.
________ TestPrivateEndpoint.test_create_deploy_delete_private_endpoint ________
[gw13] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python

self =
shared_state = {'resources': [
resource name: projects/580378083368/locations/us-central1/endpoints/3522361365882732544]}

    def test_create_deploy_delete_private_endpoint(self, shared_state):
        # Collection of resources generated by this test, to be deleted during teardown
        shared_state["resources"] = []

        aiplatform.init(
            project=e2e_base._PROJECT,
            location=e2e_base._LOCATION,
        )

        private_endpoint = aiplatform.PrivateEndpoint.create(
            display_name=self._make_display_name("private_endpoint_test"),
            network=_PRIVATE_ENDPOINT_NETWORK,
        )
        shared_state["resources"].append(private_endpoint)

        # Verify that the retrieved private Endpoint is the same
        my_private_endpoint = aiplatform.PrivateEndpoint(
            endpoint_name=private_endpoint.resource_name
        )
        assert private_endpoint.resource_name == my_private_endpoint.resource_name
        assert private_endpoint.display_name == my_private_endpoint.display_name

        # Verify the endpoint is in the private Endpoint list
        list_private_endpoint = aiplatform.PrivateEndpoint.list()
        assert private_endpoint.resource_name in [
            private_endpoint.resource_name for private_endpoint in list_private_endpoint
        ]

        # Retrieve permanent model, deploy to private Endpoint, then undeploy
        my_model = aiplatform.Model(model_name=_MODEL_ID)

>       my_private_endpoint.deploy(model=my_model)

tests/system/aiplatform/test_private_endpoint.py:64:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/models.py:4077: in deploy
self._deploy(
google/cloud/aiplatform/base.py:863: in wrapper
return method(*args, **kwargs)
google/cloud/aiplatform/models.py:1600: in _deploy
self._deploy_call(
google/cloud/aiplatform/models.py:2003: in _deploy_call
operation_future.result(timeout=None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
timeout = None, retry = None, polling = None

    def result(self, timeout=_DEFAULT_VALUE, retry=None, polling=None):
        """Get the result of the operation.

        This method will poll for operation status periodically, blocking if
        necessary. If you just want to make sure that this method does not block
        for more than X seconds and you do not care about the nitty-gritty of
        how this method operates, just call it with ``result(timeout=X)``. The
        other parameters are for advanced use only.

        Every call to this method is controlled by the following three
        parameters, each of which has a specific, distinct role, even though all three
        may look very similar: ``timeout``, ``retry`` and ``polling``. In most
        cases users do not need to specify any custom values for any of these
        parameters and may simply rely on default ones instead.

        If you choose to specify custom parameters, please make sure you've
        read the documentation below carefully.

        First, please check :class:`google.api_core.retry.Retry`
        class documentation for the proper definition of timeout and deadline
        terms and for the definition the three different types of timeouts.
        This class operates in terms of Retry Timeout and Polling Timeout. It
        does not let customizing RPC timeout and the user is expected to rely on
        default behavior for it.

        The roles of each argument of this method are as follows:

        ``timeout`` (int): (Optional) The Polling Timeout as defined in
        :class:`google.api_core.retry.Retry`. If the operation does not complete
        within this timeout an exception will be thrown. This parameter affects
        neither Retry Timeout nor RPC Timeout.

        ``retry`` (google.api_core.retry.Retry): (Optional) How to retry the
        polling RPC. The ``retry.timeout`` property of this parameter is the
        Retry Timeout as defined in :class:`google.api_core.retry.Retry`.
        This parameter defines ONLY how the polling RPC call is retried
        (i.e. what to do if the RPC we used for polling returned an error). It
        does NOT define how the polling is done (i.e. how frequently and for
        how long to call the polling RPC); use the ``polling`` parameter for that.
        If a polling RPC throws and error and retrying it fails, the whole
        future fails with the corresponding exception. If you want to tune which
        server response error codes are not fatal for operation polling, use this
        parameter to control that (``retry.predicate`` in particular).

        ``polling`` (google.api_core.retry.Retry): (Optional) How often and
        for how long to call the polling RPC periodically (i.e. what to do if
        a polling rpc returned successfully but its returned result indicates
        that the long running operation is not completed yet, so we need to
        check it again at some point in future). This parameter does NOT define
        how to retry each individual polling RPC in case of an error; use the
        ``retry`` parameter for that. The ``polling.timeout`` of this parameter
        is Polling Timeout as defined in as defined in
        :class:`google.api_core.retry.Retry`.

        For each of the arguments, there are also default values in place, which
        will be used if a user does not specify their own. The default values
        for the three parameters are not to be confused with the default values
        for the corresponding arguments in this method (those serve as "not set"
        markers for the resolution logic).

        If ``timeout`` is provided (i.e.``timeout is not _DEFAULT VALUE``; note
        the ``None`` value means "infinite timeout"), it will be used to control
        the actual Polling Timeout. Otherwise, the ``polling.timeout`` value
        will be used instead (see below for how the ``polling`` config itself
        gets resolved). In other words, this parameter effectively overrides
        the ``polling.timeout`` value if specified. This is so to preserve
        backward compatibility.

        If ``retry`` is provided (i.e. ``retry is not None``) it will be used to
        control retry behavior for the polling RPC and the ``retry.timeout``
        will determine the Retry Timeout. If not provided, the
        polling RPC will be called with whichever default retry config was
        specified for the polling RPC at the moment of the construction of the
        polling RPC's client. For example, if the polling RPC is
        ``operations_client.get_operation()``, the ``retry`` parameter will be
        controlling its retry behavior (not polling behavior) and, if not
        specified, that specific method (``operations_client.get_operation()``)
        will be retried according to the default retry config provided during
        creation of ``operations_client`` client instead. This argument exists
        mainly for backward compatibility; users are very unlikely to ever need
        to set this parameter explicitly.

        If ``polling`` is provided (i.e. ``polling is not None``), it will be used
        to control the overall polling behavior and ``polling.timeout`` will
        control Polling Timeout unless it is overridden by ``timeout`` parameter
        as described above. If not provided, the``polling`` parameter specified
        during construction of this future (the ``polling`` argument in the
        constructor) will be used instead. Note: since the ``timeout`` argument may
        override ``polling.timeout`` value, this parameter should be viewed as
        coupled with the ``timeout`` parameter as described above.

        Args:
            timeout (int): (Optional) How long (in seconds) to wait for the
                operation to complete. If None, wait indefinitely.
            retry (google.api_core.retry.Retry): (Optional) How to retry the
                polling RPC. This defines ONLY how the polling RPC call is
                retried (i.e. what to do if the RPC we used for polling returned
                an error). It does NOT define how the polling is done (i.e. how
                frequently and for how long to call the polling RPC).
            polling (google.api_core.retry.Retry): (Optional) How often and
                for how long to call polling RPC periodically. This parameter
                does NOT define how to retry each individual polling RPC call
                (use the ``retry`` parameter for that).

        Returns:
            google.protobuf.Message: The Operation's result.

        Raises:
            google.api_core.GoogleAPICallError: If the operation errors or if
                the timeout is reached before the operation completes.
        """

        self._blocking_poll(timeout=timeout, retry=retry, polling=polling)

        if self._exception is not None:
            # pylint: disable=raising-bad-type
            # Pylint doesn't recognize that this is valid in this case.
>           raise self._exception
E           google.api_core.exceptions.DeadlineExceeded: 504 Timeout. Model server logs can be found at https://console.cloud.google.com/logs/viewer?project=580378083368&resource=aiplatform.googleapis.com%252FEndpoint&advancedFilter=resource.type%3D%22aiplatform.googleapis.com%2FEndpoint%22%0Aresource.labels.endpoint_id%3D%223522361365882732544%22%0Aresource.labels.location%3D%22us-central1%22. If this error persists, contact ai-platform-unified-feedback@google.com 4: Timeout. Model server logs can be found at https://console.cloud.google.com/logs/viewer?project=580378083368&resource=aiplatform.googleapis.com%252FEndpoint&advancedFilter=resource.type%3D%22aiplatform.googleapis.com%2FEndpoint%22%0Aresource.labels.endpoint_id%3D%223522361365882732544%22%0Aresource.labels.location%3D%22us-central1%22. If this error persists, contact ai-platform-unified-feedback@google.com

.nox/system-3-10/lib/python3.10/site-packages/google/api_core/future/polling.py:261: DeadlineExceeded
------------------------------ Captured log call -------------------------------
INFO google.cloud.aiplatform.models:base.py:85 Creating PrivateEndpoint
INFO google.cloud.aiplatform.models:base.py:88 Create PrivateEndpoint backing LRO: projects/580378083368/locations/us-central1/endpoints/3522361365882732544/operations/1596306612253884416
INFO google.cloud.aiplatform.models:base.py:113 PrivateEndpoint created. Resource name: projects/580378083368/locations/us-central1/endpoints/3522361365882732544
INFO google.cloud.aiplatform.models:base.py:114 To use this PrivateEndpoint in another session:
INFO google.cloud.aiplatform.models:base.py:115 endpoint = aiplatform.PrivateEndpoint('projects/580378083368/locations/us-central1/endpoints/3522361365882732544')
INFO google.cloud.aiplatform.models:base.py:189 Deploying Model projects/580378083368/locations/us-central1/models/6430031960164270080 to PrivateEndpoint : projects/580378083368/locations/us-central1/endpoints/3522361365882732544
INFO google.cloud.aiplatform.models:models.py:1902 Using default machine_type: n1-standard-2
INFO google.cloud.aiplatform.models:base.py:209 Deploy PrivateEndpoint model backing LRO: projects/580378083368/locations/us-central1/endpoints/3522361365882732544/operations/4538283068833660928
---------------------------- Captured log teardown -----------------------------
INFO google.cloud.aiplatform.base:base.py:189 Deleting PrivateEndpoint : projects/580378083368/locations/us-central1/endpoints/3522361365882732544
INFO google.cloud.aiplatform.base:base.py:222 PrivateEndpoint deleted. . Resource name: projects/580378083368/locations/us-central1/endpoints/3522361365882732544
INFO google.cloud.aiplatform.base:base.py:156 Deleting PrivateEndpoint resource: projects/580378083368/locations/us-central1/endpoints/3522361365882732544
INFO google.cloud.aiplatform.base:base.py:161 Delete PrivateEndpoint backing LRO: projects/580378083368/locations/us-central1/operations/8052216678089490432
INFO google.cloud.aiplatform.base:base.py:174 PrivateEndpoint resource projects/580378083368/locations/us-central1/endpoints/3522361365882732544 deleted.
_________________ TestBatchPredictionJob.test_model_monitoring _________________
[gw3] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python

self =

def test_model_monitoring(self):
aiplatform.init(project=e2e_base._PROJECT, location=e2e_base._LOCATION)
model = aiplatform.Model(_PERMANENT_CHURN_MODEL_ID)
skew_detection_config = aiplatform.model_monitoring.SkewDetectionConfig(
data_source=_PERMANENT_CHURN_TRAINING_DATA,
target_field="churned",
skew_thresholds={"cnt_level_start_quickplay": 0.001},
data_format="csv",
)
drift_detection_config = aiplatform.model_monitoring.DriftDetectionConfig(
drift_thresholds={"cnt_user_engagement": 0.01}
)
mm_config = aiplatform.model_monitoring.ObjectiveConfig(
skew_detection_config=skew_detection_config,
drift_detection_config=drift_detection_config,
)

> bpj = aiplatform.BatchPredictionJob.create(
job_display_name=self._make_display_name(key=_TEST_JOB_DISPLAY_NAME),
model_name=model,
gcs_source=_PERMANENT_CHURN_TESTING_DATA,
gcs_destination_prefix=_PERMANENT_CHURN_GS_DEST,
machine_type=_TEST_MACHINE_TYPE,
starting_replica_count=_TEST_STARTING_REPLICA_COUNT,
max_replica_count=_TEST_MAX_REPLICA_COUNT,
generate_explanation=True,
sync=True,
model_monitoring_objective_config=mm_config,
)

tests/system/aiplatform/test_batch_prediction.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/jobs.py:620: in create
return cls._submit_impl(
google/cloud/aiplatform/jobs.py:1337: in _submit_impl
return cls._submit_and_optionally_wait_with_sync_support(
google/cloud/aiplatform/base.py:863: in wrapper
return method(*args, **kwargs)
google/cloud/aiplatform/jobs.py:1422: in _submit_and_optionally_wait_with_sync_support
batch_prediction_job._block_until_complete()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource name: projects/580378083368/locations/us-central1/batchPredictionJobs/8699668979188236288

def _block_until_complete(self):
"""Helper method to block and check on job until complete.

Raises:
RuntimeError: If job failed or cancelled.
"""

log_wait = _LOG_WAIT_TIME

previous_time = time.time()
while self.state not in _JOB_COMPLETE_STATES:
current_time = time.time()
if current_time - previous_time >= log_wait:
self._log_job_state()
log_wait = min(log_wait * _WAIT_TIME_MULTIPLIER, _MAX_WAIT_TIME)
previous_time = current_time
time.sleep(_JOB_WAIT_TIME)

self._log_job_state()

# Error is only populated when the job state is
# JOB_STATE_FAILED or JOB_STATE_CANCELLED.
if self._gca_resource.state in _JOB_ERROR_STATES:
> raise RuntimeError("Job failed with:\n%s" % self._gca_resource.error)
E RuntimeError: Job failed with:
E code: 14
E message: "Unable to prepare an infrastructure for serving within time"

google/cloud/aiplatform/jobs.py:251: RuntimeError
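The ``_block_until_complete`` helper shown in the traceback polls the job state on a fixed interval while backing off only the logging frequency. A minimal standalone sketch of the same pattern (the constant names mirror the SDK's, but the values and function signature here are illustrative):

```python
import time

# Illustrative values for the SDK's polling knobs.
_LOG_WAIT_TIME = 5          # seconds before the first state log
_WAIT_TIME_MULTIPLIER = 2   # backoff factor for the log interval
_MAX_WAIT_TIME = 60         # cap on the log interval
_JOB_WAIT_TIME = 1          # sleep between state checks


def block_until_complete(get_state, complete_states, error_states,
                         log=print, sleep=time.sleep, clock=time.time):
    """Poll get_state() until it returns a terminal state.

    The state is logged with a capped exponential backoff on the logging
    interval, then a RuntimeError is raised for error terminal states —
    the same shape as _block_until_complete above.
    """
    log_wait = _LOG_WAIT_TIME
    previous_time = clock()
    state = get_state()
    while state not in complete_states:
        current_time = clock()
        if current_time - previous_time >= log_wait:
            log(f"current state: {state}")
            log_wait = min(log_wait * _WAIT_TIME_MULTIPLIER, _MAX_WAIT_TIME)
            previous_time = current_time
        sleep(_JOB_WAIT_TIME)
        state = get_state()
    log(f"final state: {state}")
    if state in error_states:
        raise RuntimeError(f"Job failed in terminal state {state}")
    return state
```

Note that the error detail (``self._gca_resource.error``) is only populated for the failed/cancelled terminal states, which is why the message surfaces only after the final state log.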
------------------------------ Captured log call -------------------------------
INFO google.cloud.aiplatform.jobs:base.py:85 Creating BatchPredictionJob
INFO google.cloud.aiplatform.jobs:base.py:113 BatchPredictionJob created. Resource name: projects/580378083368/locations/us-central1/batchPredictionJobs/8699668979188236288
INFO google.cloud.aiplatform.jobs:base.py:114 To use this BatchPredictionJob in another session:
INFO google.cloud.aiplatform.jobs:base.py:115 bpj = aiplatform.BatchPredictionJob('projects/580378083368/locations/us-central1/batchPredictionJobs/8699668979188236288')
INFO google.cloud.aiplatform.jobs:jobs.py:1418 View Batch Prediction Job:
https://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/8699668979188236288?project=580378083368
INFO google.cloud.aiplatform.jobs:jobs.py:219 BatchPredictionJob projects/580378083368/locations/us-central1/batchPredictionJobs/8699668979188236288 current state:
JobState.JOB_STATE_RUNNING
INFO google.cloud.aiplatform.jobs:jobs.py:219 BatchPredictionJob projects/580378083368/locations/us-central1/batchPredictionJobs/8699668979188236288 current state:
JobState.JOB_STATE_FAILED
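The job above failed with code 14, which is gRPC ``UNAVAILABLE`` — here a capacity problem ("Unable to prepare an infrastructure for serving within time") that is often transient. One common mitigation in system tests is to retry the whole submission; a generic, hypothetical retry wrapper (not part of the aiplatform SDK):

```python
import time


def retry_transient(fn, *, attempts=3, base_delay=2.0,
                    transient=(RuntimeError,), sleep=time.sleep):
    """Invoke fn(), retrying with exponential backoff on transient errors.

    fn would wrap something like a BatchPredictionJob.create(...) call;
    the last failure is re-raised once attempts are exhausted.
    """
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except transient:
            if attempt == attempts:
                raise
            sleep(base_delay * 2 ** (attempt - 1))
```

Whether a code-14 failure is actually retryable depends on the cause; persistent quota or capacity exhaustion in the region will fail every attempt.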
_________________ TestEndToEndTabular.test_end_to_end_tabular __________________
[gw9] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python

self =
shared_state = {'bucket': , 'resources': [}

def test_end_to_end_tabular(self, shared_state):
"""Build dataset, train a custom and AutoML model, deploy, and get predictions"""

# Collection of resources generated by this test, to be deleted during teardown
shared_state["resources"] = []

aiplatform.init(
project=e2e_base._PROJECT,
location=e2e_base._LOCATION,
staging_bucket=shared_state["staging_bucket_name"],
)

# Create and import to single managed dataset for both training jobs

ds = aiplatform.TabularDataset.create(
display_name=self._make_display_name("dataset"),
gcs_source=[_DATASET_TRAINING_SRC],
sync=False,
create_request_timeout=180.0,
)

shared_state["resources"].extend([ds])

# Define both training jobs

custom_job = aiplatform.CustomTrainingJob(
display_name=self._make_display_name("train-housing-custom"),
script_path=_LOCAL_TRAINING_SCRIPT_PATH,
container_uri="gcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest",
requirements=["gcsfs==0.7.1"],
model_serving_container_image_uri="gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:latest",
)

automl_job = aiplatform.AutoMLTabularTrainingJob(
display_name=self._make_display_name("train-housing-automl"),
optimization_prediction_type="regression",
optimization_objective="minimize-rmse",
)

# Kick off both training jobs, AutoML job will take approx one hour to run

custom_model = custom_job.run(
ds,
replica_count=1,
model_display_name=self._make_display_name("custom-housing-model"),
timeout=1234,
restart_job_on_worker_restart=True,
enable_web_access=True,
sync=False,
create_request_timeout=None,
disable_retries=True,
)

automl_model = automl_job.run(
dataset=ds,
target_column="median_house_value",
model_display_name=self._make_display_name("automl-housing-model"),
sync=False,
)

shared_state["resources"].extend(
[automl_job, automl_model, custom_job, custom_model]
)

# Deploy both models after training completes
custom_endpoint = custom_model.deploy(machine_type="n1-standard-4", sync=False)
automl_endpoint = automl_model.deploy(machine_type="n1-standard-4", sync=False)
shared_state["resources"].extend([automl_endpoint, custom_endpoint])

custom_batch_prediction_job = custom_model.batch_predict(
job_display_name=self._make_display_name("custom-housing-model"),
instances_format="jsonl",
machine_type="n1-standard-4",
gcs_source=_DATASET_BATCH_PREDICT_SRC,
gcs_destination_prefix=f'gs://{shared_state["staging_bucket_name"]}/bp_results/',
sync=False,
)

shared_state["resources"].append(custom_batch_prediction_job)

in_progress_done_check = custom_job.done()
custom_job.wait_for_resource_creation()

automl_job.wait_for_resource_creation()
# custom_batch_prediction_job.wait_for_resource_creation()

# Send online prediction with same instance to both deployed models
# This sample is taken from an observation where median_house_value = 94600
custom_endpoint.wait()

# Check scheduling is correctly set
assert (
custom_job._gca_resource.training_task_inputs["scheduling"]["timeout"]
== "1234s"
)
assert (
custom_job._gca_resource.training_task_inputs["scheduling"][
"restartJobOnWorkerRestart"
]
is True
)

custom_prediction = custom_endpoint.predict([_INSTANCE], timeout=180.0)

> custom_batch_prediction_job.wait()

tests/system/aiplatform/test_e2e_tabular.py:163:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/base.py:306: in wait
self._raise_future_exception()
google/cloud/aiplatform/base.py:272: in _raise_future_exception
raise self._exception
google/cloud/aiplatform/base.py:284: in _complete_future
future.result() # raises
/usr/local/lib/python3.10/concurrent/futures/_base.py:451: in result
return self.__get_result()
/usr/local/lib/python3.10/concurrent/futures/_base.py:403: in __get_result
raise self._exception
/usr/local/lib/python3.10/concurrent/futures/thread.py:58: in run
result = self.fn(*self.args, **self.kwargs)
google/cloud/aiplatform/base.py:374: in wait_for_dependencies_and_invoke
result = method(*args, **kwargs)
google/cloud/aiplatform/jobs.py:1422: in _submit_and_optionally_wait_with_sync_support
batch_prediction_job._block_until_complete()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource name: projects/580378083368/locations/us-central1/batchPredictionJobs/2133420722482053120

def _block_until_complete(self):
"""Helper method to block and check on job until complete.

Raises:
RuntimeError: If job failed or cancelled.
"""

log_wait = _LOG_WAIT_TIME

previous_time = time.time()
while self.state not in _JOB_COMPLETE_STATES:
current_time = time.time()
if current_time - previous_time >= log_wait:
self._log_job_state()
log_wait = min(log_wait * _WAIT_TIME_MULTIPLIER, _MAX_WAIT_TIME)
previous_time = current_time
time.sleep(_JOB_WAIT_TIME)

self._log_job_state()

# Error is only populated when the job state is
# JOB_STATE_FAILED or JOB_STATE_CANCELLED.
if self._gca_resource.state in _JOB_ERROR_STATES:
> raise RuntimeError("Job failed with:\n%s" % self._gca_resource.error)
E RuntimeError: Job failed with:
E code: 14
E message: "Unable to prepare an infrastructure for serving within time"

google/cloud/aiplatform/jobs.py:251: RuntimeError
------------------------------ Captured log call -------------------------------
INFO google.cloud.aiplatform.datasets.dataset:base.py:85 Creating TabularDataset
INFO google.cloud.aiplatform.datasets.dataset:base.py:88 Create TabularDataset backing LRO: projects/580378083368/locations/us-central1/datasets/4303176799568789504/operations/3160181582858289152
INFO google.cloud.aiplatform.datasets.dataset:base.py:113 TabularDataset created. Resource name: projects/580378083368/locations/us-central1/datasets/4303176799568789504
INFO google.cloud.aiplatform.datasets.dataset:base.py:114 To use this TabularDataset in another session:
INFO google.cloud.aiplatform.datasets.dataset:base.py:115 ds = aiplatform.TabularDataset('projects/580378083368/locations/us-central1/datasets/4303176799568789504')
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:6373 No column transformations provided, so now retrieving columns from dataset in order to set default column transformations.
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:6384 The column transformation of type 'auto' was set for the following columns: ['latitude', 'total_bedrooms', 'longitude', 'median_income', 'total_rooms', 'housing_median_age', 'population', 'households'].
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:550 No dataset split provided. The service will use a default split.
INFO google.cloud.aiplatform.utils.source_utils:source_utils.py:222 Training script copied to:
gs://temp-vertex-sdk-e2e-tabular-cc3f4132-a67e-48e2-95d0-2fb28afc6a8/aiplatform-2024-11-19-23:14:35.743-aiplatform_custom_trainer_script-0.1.tar.gz.
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:1629 Training Output directory:
gs://temp-vertex-sdk-e2e-tabular-cc3f4132-a67e-48e2-95d0-2fb28afc6a8/aiplatform-custom-training-2024-11-19-23:14:35.991
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:550 No dataset split provided. The service will use a default split.
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:852 View Training:
https://console.cloud.google.com/ai/platform/locations/us-central1/training/2964334853731909632?project=580378083368
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:852 View Training:
https://console.cloud.google.com/ai/platform/locations/us-central1/training/7272027897311789056?project=580378083368
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:971 AutoMLTabularTrainingJob projects/580378083368/locations/us-central1/trainingPipelines/2964334853731909632 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:971 CustomTrainingJob projects/580378083368/locations/us-central1/trainingPipelines/7272027897311789056 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:1707 View backing custom job:
https://console.cloud.google.com/ai/platform/locations/us-central1/training/5745721040005234688?project=580378083368
INFO google.cloud.aiplatform.training_jobs:base.py:222 CustomTrainingJob run completed. Resource name: projects/580378083368/locations/us-central1/trainingPipelines/7272027897311789056
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:989 Model available at projects/580378083368/locations/us-central1/models/4610248956729884672
INFO google.cloud.aiplatform.jobs:base.py:85 Creating BatchPredictionJob
INFO google.cloud.aiplatform.models:base.py:85 Creating Endpoint
INFO google.cloud.aiplatform.models:base.py:88 Create Endpoint backing LRO: projects/580378083368/locations/us-central1/endpoints/3119289199233073152/operations/8060097977437388800
INFO google.cloud.aiplatform.jobs:base.py:113 BatchPredictionJob created. Resource name: projects/580378083368/locations/us-central1/batchPredictionJobs/2133420722482053120
INFO google.cloud.aiplatform.jobs:base.py:114 To use this BatchPredictionJob in another session:
INFO google.cloud.aiplatform.jobs:base.py:115 bpj = aiplatform.BatchPredictionJob('projects/580378083368/locations/us-central1/batchPredictionJobs/2133420722482053120')
INFO google.cloud.aiplatform.jobs:jobs.py:1418 View Batch Prediction Job:
https://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/2133420722482053120?project=580378083368
INFO google.cloud.aiplatform.models:base.py:113 Endpoint created. Resource name: projects/580378083368/locations/us-central1/endpoints/3119289199233073152
INFO google.cloud.aiplatform.models:base.py:114 To use this Endpoint in another session:
INFO google.cloud.aiplatform.models:base.py:115 endpoint = aiplatform.Endpoint('projects/580378083368/locations/us-central1/endpoints/3119289199233073152')
INFO google.cloud.aiplatform.models:base.py:189 Deploying model to Endpoint : projects/580378083368/locations/us-central1/endpoints/3119289199233073152
INFO google.cloud.aiplatform.models:base.py:209 Deploy Endpoint model backing LRO: projects/580378083368/locations/us-central1/endpoints/3119289199233073152/operations/4853535042749595648
INFO google.cloud.aiplatform.jobs:jobs.py:219 BatchPredictionJob projects/580378083368/locations/us-central1/batchPredictionJobs/2133420722482053120 current state:
JobState.JOB_STATE_RUNNING
INFO google.cloud.aiplatform.models:base.py:222 Endpoint model deployed. Resource name: projects/580378083368/locations/us-central1/endpoints/3119289199233073152
INFO google.cloud.aiplatform.jobs:jobs.py:219 BatchPredictionJob projects/580378083368/locations/us-central1/batchPredictionJobs/2133420722482053120 current state:
JobState.JOB_STATE_FAILED
---------------------------- Captured log teardown -----------------------------
ERROR root:e2e_base.py:210 Could not delete resource: is waiting for upstream dependencies to complete. due to: Endpoint resource has not been created.
Traceback (most recent call last):
File "/tmpfs/src/github/python-aiplatform/tests/system/aiplatform/e2e_base.py", line 204, in tear_down_resources
resource.delete(force=True)
File "/tmpfs/src/github/python-aiplatform/google/cloud/aiplatform/models.py", line 3126, in delete
self.undeploy_all(sync=sync)
File "/tmpfs/src/github/python-aiplatform/google/cloud/aiplatform/models.py", line 3087, in undeploy_all
self._sync_gca_resource()
File "/tmpfs/src/github/python-aiplatform/google/cloud/aiplatform/base.py", line 699, in _sync_gca_resource
self._gca_resource = self._get_gca_resource(resource_name=self.resource_name)
File "/tmpfs/src/github/python-aiplatform/google/cloud/aiplatform/base.py", line 729, in resource_name
self._assert_gca_resource_is_available()
File "/tmpfs/src/github/python-aiplatform/google/cloud/aiplatform/models.py", line 729, in _assert_gca_resource_is_available
super()._assert_gca_resource_is_available()
File "/tmpfs/src/github/python-aiplatform/google/cloud/aiplatform/base.py", line 1420, in _assert_gca_resource_is_available
raise RuntimeError(
RuntimeError: Endpoint resource has not been created.
INFO google.cloud.aiplatform.models:base.py:189 Undeploying Endpoint model: projects/580378083368/locations/us-central1/endpoints/3119289199233073152
INFO google.cloud.aiplatform.models:base.py:209 Undeploy Endpoint model backing LRO: projects/580378083368/locations/us-central1/endpoints/3119289199233073152/operations/396519524024713216
INFO google.cloud.aiplatform.models:base.py:222 Endpoint model undeployed. Resource name: projects/580378083368/locations/us-central1/endpoints/3119289199233073152
INFO google.cloud.aiplatform.base:base.py:189 Deleting Endpoint : projects/580378083368/locations/us-central1/endpoints/3119289199233073152
INFO google.cloud.aiplatform.base:base.py:222 Endpoint deleted. . Resource name: projects/580378083368/locations/us-central1/endpoints/3119289199233073152
INFO google.cloud.aiplatform.base:base.py:156 Deleting Endpoint resource: projects/580378083368/locations/us-central1/endpoints/3119289199233073152
INFO google.cloud.aiplatform.base:base.py:161 Delete Endpoint backing LRO: projects/580378083368/locations/us-central1/operations/8047713078462119936
INFO google.cloud.aiplatform.base:base.py:174 Endpoint resource projects/580378083368/locations/us-central1/endpoints/3119289199233073152 deleted.
INFO google.cloud.aiplatform.base:base.py:189 Deleting TabularDataset : projects/580378083368/locations/us-central1/datasets/4303176799568789504
INFO google.cloud.aiplatform.base:base.py:222 TabularDataset deleted. . Resource name: projects/580378083368/locations/us-central1/datasets/4303176799568789504
INFO google.cloud.aiplatform.base:base.py:156 Deleting TabularDataset resource: projects/580378083368/locations/us-central1/datasets/4303176799568789504
INFO google.cloud.aiplatform.base:base.py:161 Delete TabularDataset backing LRO: projects/580378083368/locations/us-central1/operations/6980359966775312384
INFO google.cloud.aiplatform.base:base.py:174 TabularDataset resource projects/580378083368/locations/us-central1/datasets/4303176799568789504 deleted.
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:971 AutoMLTabularTrainingJob projects/580378083368/locations/us-central1/trainingPipelines/2964334853731909632 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO google.cloud.aiplatform.training_jobs:base.py:222 AutoMLTabularTrainingJob run completed. Resource name: projects/580378083368/locations/us-central1/trainingPipelines/2964334853731909632
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:989 Model available at projects/580378083368/locations/us-central1/models/2278510249658810368
INFO google.cloud.aiplatform.base:base.py:189 Deleting AutoMLTabularTrainingJob : projects/580378083368/locations/us-central1/trainingPipelines/2964334853731909632
INFO google.cloud.aiplatform.models:base.py:85 Creating Endpoint
INFO google.cloud.aiplatform.models:base.py:88 Create Endpoint backing LRO: projects/580378083368/locations/us-central1/endpoints/6247039140441882624/operations/6735335999548686336
INFO google.cloud.aiplatform.base:base.py:222 AutoMLTabularTrainingJob deleted. . Resource name: projects/580378083368/locations/us-central1/trainingPipelines/2964334853731909632
INFO google.cloud.aiplatform.base:base.py:156 Deleting AutoMLTabularTrainingJob resource: projects/580378083368/locations/us-central1/trainingPipelines/2964334853731909632
INFO google.cloud.aiplatform.base:base.py:161 Delete AutoMLTabularTrainingJob backing LRO: projects/580378083368/locations/us-central1/operations/7825207109372346368
INFO google.cloud.aiplatform.base:base.py:174 AutoMLTabularTrainingJob resource projects/580378083368/locations/us-central1/trainingPipelines/2964334853731909632 deleted.
INFO google.cloud.aiplatform.base:base.py:189 Deleting Model : projects/580378083368/locations/us-central1/models/2278510249658810368
INFO google.cloud.aiplatform.base:base.py:222 Model deleted. . Resource name: projects/580378083368/locations/us-central1/models/2278510249658810368
INFO google.cloud.aiplatform.base:base.py:156 Deleting Model resource: projects/580378083368/locations/us-central1/models/2278510249658810368
INFO google.cloud.aiplatform.base:base.py:161 Delete Model backing LRO: projects/580378083368/locations/us-central1/models/2278510249658810368/operations/5993367960939397120
INFO google.cloud.aiplatform.base:base.py:174 Model resource projects/580378083368/locations/us-central1/models/2278510249658810368 deleted.
INFO google.cloud.aiplatform.base:base.py:189 Deleting CustomTrainingJob : projects/580378083368/locations/us-central1/trainingPipelines/7272027897311789056
INFO google.cloud.aiplatform.base:base.py:222 CustomTrainingJob deleted. . Resource name: projects/580378083368/locations/us-central1/trainingPipelines/7272027897311789056
INFO google.cloud.aiplatform.base:base.py:156 Deleting CustomTrainingJob resource: projects/580378083368/locations/us-central1/trainingPipelines/7272027897311789056
INFO google.cloud.aiplatform.base:base.py:161 Delete CustomTrainingJob backing LRO: projects/580378083368/locations/us-central1/operations/5894288769137246208
INFO google.cloud.aiplatform.base:base.py:174 CustomTrainingJob resource projects/580378083368/locations/us-central1/trainingPipelines/7272027897311789056 deleted.
INFO google.cloud.aiplatform.base:base.py:189 Deleting Model : projects/580378083368/locations/us-central1/models/4610248956729884672
INFO google.cloud.aiplatform.base:base.py:222 Model deleted. . Resource name: projects/580378083368/locations/us-central1/models/4610248956729884672
INFO google.cloud.aiplatform.base:base.py:156 Deleting Model resource: projects/580378083368/locations/us-central1/models/4610248956729884672
INFO google.cloud.aiplatform.base:base.py:161 Delete Model backing LRO: projects/580378083368/locations/us-central1/models/4610248956729884672/operations/3284452785076043776
INFO google.cloud.aiplatform.base:base.py:174 Model resource projects/580378083368/locations/us-central1/models/4610248956729884672 deleted.
ERROR root:e2e_base.py:210 Could not delete resource:
resource name: projects/580378083368/locations/us-central1/batchPredictionJobs/2133420722482053120 due to: Job failed with:
code: 14
message: "Unable to prepare an infrastructure for serving within time"
Traceback (most recent call last):
  File "/tmpfs/src/github/python-aiplatform/tests/system/aiplatform/e2e_base.py", line 208, in tear_down_resources
    resource.delete()
  File "/tmpfs/src/github/python-aiplatform/google/cloud/aiplatform/base.py", line 857, in wrapper
    self._raise_future_exception()
  File "/tmpfs/src/github/python-aiplatform/google/cloud/aiplatform/base.py", line 272, in _raise_future_exception
    raise self._exception
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/_pytest/runner.py", line 341, in from_call
    result: TResult | None = func()
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/_pytest/runner.py", line 242, in <lambda>
    lambda: runtest_hook(item=item, **kwds), when=when, reraise=reraise
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pluggy/_hooks.py", line 513, in __call__
    return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pluggy/_manager.py", line 120, in _hookexec
    return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pluggy/_callers.py", line 139, in _multicall
    raise exception.with_traceback(exception.__traceback__)
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pluggy/_callers.py", line 122, in _multicall
    teardown.throw(exception) # type: ignore[union-attr]
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/_pytest/threadexception.py", line 92, in pytest_runtest_call
    yield from thread_exception_runtest_hook()
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/_pytest/threadexception.py", line 68, in thread_exception_runtest_hook
    yield
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pluggy/_callers.py", line 122, in _multicall
    teardown.throw(exception) # type: ignore[union-attr]
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/_pytest/unraisableexception.py", line 95, in pytest_runtest_call
    yield from unraisable_exception_runtest_hook()
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/_pytest/unraisableexception.py", line 70, in unraisable_exception_runtest_hook
    yield
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pluggy/_callers.py", line 122, in _multicall
    teardown.throw(exception) # type: ignore[union-attr]
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/_pytest/logging.py", line 846, in pytest_runtest_call
    yield from self._runtest_for(item, "call")
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/_pytest/logging.py", line 829, in _runtest_for
    yield
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pluggy/_callers.py", line 122, in _multicall
    teardown.throw(exception) # type: ignore[union-attr]
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/_pytest/capture.py", line 880, in pytest_runtest_call
    return (yield)
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pluggy/_callers.py", line 122, in _multicall
    teardown.throw(exception) # type: ignore[union-attr]
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/_pytest/skipping.py", line 257, in pytest_runtest_call
    return (yield)
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pluggy/_callers.py", line 103, in _multicall
    res = hook_impl.function(*args)
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/_pytest/runner.py", line 174, in pytest_runtest_call
    item.runtest()
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/_pytest/python.py", line 1627, in runtest
    self.ihook.pytest_pyfunc_call(pyfuncitem=self)
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pluggy/_hooks.py", line 513, in __call__
    return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pluggy/_manager.py", line 120, in _hookexec
    return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pluggy/_callers.py", line 182, in _multicall
    return outcome.get_result()
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pluggy/_result.py", line 100, in get_result
    raise exc.with_traceback(exc.__traceback__)
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pluggy/_callers.py", line 103, in _multicall
    res = hook_impl.function(*args)
  File "/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/_pytest/python.py", line 159, in pytest_pyfunc_call
    result = testfunction(**testargs)
  File "/tmpfs/src/github/python-aiplatform/tests/system/aiplatform/test_e2e_tabular.py", line 163, in test_end_to_end_tabular
    custom_batch_prediction_job.wait()
  File "/tmpfs/src/github/python-aiplatform/google/cloud/aiplatform/base.py", line 306, in wait
    self._raise_future_exception()
  File "/tmpfs/src/github/python-aiplatform/google/cloud/aiplatform/base.py", line 272, in _raise_future_exception
    raise self._exception
  File "/tmpfs/src/github/python-aiplatform/google/cloud/aiplatform/base.py", line 284, in _complete_future
    future.result() # raises
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/usr/local/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/tmpfs/src/github/python-aiplatform/google/cloud/aiplatform/base.py", line 374, in wait_for_dependencies_and_invoke
    result = method(*args, **kwargs)
  File "/tmpfs/src/github/python-aiplatform/google/cloud/aiplatform/jobs.py", line 1422, in _submit_and_optionally_wait_with_sync_support
    batch_prediction_job._block_until_complete()
  File "/tmpfs/src/github/python-aiplatform/google/cloud/aiplatform/jobs.py", line 251, in _block_until_complete
    raise RuntimeError("Job failed with:\n%s" % self._gca_resource.error)
RuntimeError: Job failed with:
code: 14
message: "Unable to prepare an infrastructure for serving within time"

INFO google.cloud.aiplatform.models:base.py:113 Endpoint created. Resource name: projects/580378083368/locations/us-central1/endpoints/6247039140441882624
INFO google.cloud.aiplatform.models:base.py:114 To use this Endpoint in another session:
INFO google.cloud.aiplatform.models:base.py:115 endpoint = aiplatform.Endpoint('projects/580378083368/locations/us-central1/endpoints/6247039140441882624')
INFO google.cloud.aiplatform.models:base.py:189 Deploying model to Endpoint : projects/580378083368/locations/us-central1/endpoints/6247039140441882624
_ TestEndToEndForecasting3.test_end_to_end_forecasting[TemporalFusionTransformerForecastingTrainingJob] _
[gw6] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python

self =
shared_state = {'bucket': , 'staging_bucket_name': 'temp-ver...ting-69e0122b-9ad6-4a03-8339-5a25f3b', 'storage_client': }
training_job =

@pytest.mark.parametrize(
    "training_job",
    [
        training_jobs.TemporalFusionTransformerForecastingTrainingJob,
    ],
)
def test_end_to_end_forecasting(self, shared_state, training_job):
    """Builds a dataset, trains models, and gets batch predictions."""
    resources = []

    aiplatform.init(
        project=e2e_base._PROJECT,
        location=e2e_base._LOCATION,
        staging_bucket=shared_state["staging_bucket_name"],
    )
    try:
        ds = aiplatform.TimeSeriesDataset.create(
            display_name=self._make_display_name("dataset"),
            bq_source=[_TRAINING_DATASET_BQ_PATH],
            sync=False,
            create_request_timeout=180.0,
        )
        resources.append(ds)

        time_column = "date"
        time_series_identifier_column = "store_name"
        target_column = "sale_dollars"
        column_specs = {
            time_column: "timestamp",
            target_column: "numeric",
            "city": "categorical",
            "zip_code": "categorical",
            "county": "categorical",
        }

        job = training_job(
            display_name=self._make_display_name("train-housing-forecasting"),
            optimization_objective="minimize-rmse",
            column_specs=column_specs,
        )
        resources.append(job)

        model = job.run(
            dataset=ds,
            target_column=target_column,
            time_column=time_column,
            time_series_identifier_column=time_series_identifier_column,
            available_at_forecast_columns=[time_column],
            unavailable_at_forecast_columns=[target_column],
            time_series_attribute_columns=["city", "zip_code", "county"],
            forecast_horizon=30,
            context_window=30,
            data_granularity_unit="day",
            data_granularity_count=1,
            budget_milli_node_hours=1000,
            holiday_regions=["GLOBAL"],
            hierarchy_group_total_weight=1,
            window_stride_length=1,
            model_display_name=self._make_display_name("forecasting-liquor-model"),
            sync=False,
        )
        resources.append(model)

        batch_prediction_job = model.batch_predict(
            job_display_name=self._make_display_name("forecasting-liquor-model"),
            instances_format="bigquery",
            predictions_format="csv",
            machine_type="n1-standard-4",
            bigquery_source=_PREDICTION_DATASET_BQ_PATH,
            gcs_destination_prefix=(
                f'gs://{shared_state["staging_bucket_name"]}/bp_results/'
            ),
            sync=False,
        )
        resources.append(batch_prediction_job)

        batch_prediction_job.wait()
        model.wait()
        assert job.state == pipeline_state.PipelineState.PIPELINE_STATE_SUCCEEDED
        assert batch_prediction_job.state == job_state.JobState.JOB_STATE_SUCCEEDED
    finally:
        for resource in resources:
>           resource.delete()

tests/system/aiplatform/test_e2e_forecasting.py:304:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/base.py:857: in wrapper
    self._raise_future_exception()
google/cloud/aiplatform/base.py:272: in _raise_future_exception
    raise self._exception
tests/system/aiplatform/test_e2e_forecasting.py:298: in test_end_to_end_forecasting
    batch_prediction_job.wait()
google/cloud/aiplatform/base.py:306: in wait
    self._raise_future_exception()
google/cloud/aiplatform/base.py:272: in _raise_future_exception
    raise self._exception
google/cloud/aiplatform/base.py:284: in _complete_future
    future.result() # raises
/usr/local/lib/python3.10/concurrent/futures/_base.py:451: in result
    return self.__get_result()
/usr/local/lib/python3.10/concurrent/futures/_base.py:403: in __get_result
    raise self._exception
/usr/local/lib/python3.10/concurrent/futures/thread.py:58: in run
    result = self.fn(*self.args, **self.kwargs)
google/cloud/aiplatform/base.py:372: in wait_for_dependencies_and_invoke
    future.result()
/usr/local/lib/python3.10/concurrent/futures/_base.py:458: in result
    return self.__get_result()
/usr/local/lib/python3.10/concurrent/futures/_base.py:403: in __get_result
    raise self._exception
google/cloud/aiplatform/base.py:284: in _complete_future
    future.result() # raises
/usr/local/lib/python3.10/concurrent/futures/_base.py:451: in result
    return self.__get_result()
/usr/local/lib/python3.10/concurrent/futures/_base.py:403: in __get_result
    raise self._exception
google/cloud/aiplatform/base.py:284: in _complete_future
    future.result() # raises
/usr/local/lib/python3.10/concurrent/futures/_base.py:451: in result
    return self.__get_result()
/usr/local/lib/python3.10/concurrent/futures/_base.py:403: in __get_result
    raise self._exception
google/cloud/aiplatform/base.py:284: in _complete_future
    future.result() # raises
/usr/local/lib/python3.10/concurrent/futures/_base.py:451: in result
    return self.__get_result()
/usr/local/lib/python3.10/concurrent/futures/_base.py:403: in __get_result
    raise self._exception
/usr/local/lib/python3.10/concurrent/futures/thread.py:58: in run
    result = self.fn(*self.args, **self.kwargs)
google/cloud/aiplatform/base.py:374: in wait_for_dependencies_and_invoke
    result = method(*args, **kwargs)
google/cloud/aiplatform/training_jobs.py:2658: in _run
    new_model = self._run_job(
google/cloud/aiplatform/training_jobs.py:854: in _run_job
    model = self._get_model(block=block)
google/cloud/aiplatform/training_jobs.py:941: in _get_model
    self._block_until_complete()
google/cloud/aiplatform/training_jobs.py:984: in _block_until_complete
    self._raise_failure()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource name: projects/580378083368/locations/us-central1/trainingPipelines/1867708344467193856

def _raise_failure(self):
    """Helper method to raise failure if TrainingPipeline fails.

    Raises:
        RuntimeError: If training failed.
    """

    if self._gca_resource.error.code != code_pb2.OK:
>       raise RuntimeError("Training failed with:\n%s" % self._gca_resource.error)
E       RuntimeError: Training failed with:
E       code: 14
E       message: "UNAVAILABLE"

google/cloud/aiplatform/training_jobs.py:1001: RuntimeError
------------------------------ Captured log call -------------------------------
INFO google.cloud.aiplatform.datasets.dataset:base.py:85 Creating TimeSeriesDataset
INFO google.cloud.aiplatform.datasets.dataset:base.py:88 Create TimeSeriesDataset backing LRO: projects/580378083368/locations/us-central1/datasets/5107069333054423040/operations/6373499916987138048
INFO google.cloud.aiplatform.datasets.dataset:base.py:113 TimeSeriesDataset created. Resource name: projects/580378083368/locations/us-central1/datasets/5107069333054423040
INFO google.cloud.aiplatform.datasets.dataset:base.py:114 To use this TimeSeriesDataset in another session:
INFO google.cloud.aiplatform.datasets.dataset:base.py:115 ds = aiplatform.TimeSeriesDataset('projects/580378083368/locations/us-central1/datasets/5107069333054423040')
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:550 No dataset split provided. The service will use a default split.
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:852 View Training:
https://console.cloud.google.com/ai/platform/locations/us-central1/training/1867708344467193856?project=580378083368
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:971 TemporalFusionTransformerForecastingTrainingJob projects/580378083368/locations/us-central1/trainingPipelines/1867708344467193856 current state:
PipelineState.PIPELINE_STATE_RUNNING
[... previous two lines repeated 26 more times while polling ...]
INFO google.cloud.aiplatform.base:base.py:189 Deleting TimeSeriesDataset : projects/580378083368/locations/us-central1/datasets/5107069333054423040
INFO google.cloud.aiplatform.base:base.py:222 TimeSeriesDataset deleted. . Resource name: projects/580378083368/locations/us-central1/datasets/5107069333054423040
INFO google.cloud.aiplatform.base:base.py:156 Deleting TimeSeriesDataset resource: projects/580378083368/locations/us-central1/datasets/5107069333054423040
INFO google.cloud.aiplatform.base:base.py:161 Delete TimeSeriesDataset backing LRO: projects/580378083368/locations/us-central1/operations/6095824852462075904
INFO google.cloud.aiplatform.base:base.py:174 TimeSeriesDataset resource projects/580378083368/locations/us-central1/datasets/5107069333054423040 deleted.
_ TestEndToEndForecasting4.test_end_to_end_forecasting[TimeSeriesDenseEncoderForecastingTrainingJob] _
[gw7] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python

self =
shared_state = {'bucket': , 'staging_bucket_name': 'temp-ver...ting-9e527363-2c5f-4952-baba-3e6f3e4', 'storage_client': }
training_job =

@pytest.mark.parametrize(
    "training_job",
    [
        training_jobs.TimeSeriesDenseEncoderForecastingTrainingJob,
    ],
)
def test_end_to_end_forecasting(self, shared_state, training_job):
    """Builds a dataset, trains models, and gets batch predictions."""
    resources = []

    aiplatform.init(
        project=e2e_base._PROJECT,
        location=e2e_base._LOCATION,
        staging_bucket=shared_state["staging_bucket_name"],
    )
    try:
        ds = aiplatform.TimeSeriesDataset.create(
            display_name=self._make_display_name("dataset"),
            bq_source=[_TRAINING_DATASET_BQ_PATH],
            sync=False,
            create_request_timeout=180.0,
        )
        resources.append(ds)

        time_column = "date"
        time_series_identifier_column = "store_name"
        target_column = "sale_dollars"
        column_specs = {
            time_column: "timestamp",
            target_column: "numeric",
            "city": "categorical",
            "zip_code": "categorical",
            "county": "categorical",
        }

        job = training_job(
            display_name=self._make_display_name("train-housing-forecasting"),
            optimization_objective="minimize-rmse",
            column_specs=column_specs,
        )
        resources.append(job)

        model = job.run(
            dataset=ds,
            target_column=target_column,
            time_column=time_column,
            time_series_identifier_column=time_series_identifier_column,
            available_at_forecast_columns=[time_column],
            unavailable_at_forecast_columns=[target_column],
            time_series_attribute_columns=["city", "zip_code", "county"],
            forecast_horizon=30,
            context_window=30,
            data_granularity_unit="day",
            data_granularity_count=1,
            budget_milli_node_hours=1000,
            holiday_regions=["GLOBAL"],
            hierarchy_group_total_weight=1,
            window_stride_length=1,
            model_display_name=self._make_display_name("forecasting-liquor-model"),
            sync=False,
        )
        resources.append(model)

        batch_prediction_job = model.batch_predict(
            job_display_name=self._make_display_name("forecasting-liquor-model"),
            instances_format="bigquery",
            predictions_format="csv",
            machine_type="n1-standard-4",
            bigquery_source=_PREDICTION_DATASET_BQ_PATH,
            gcs_destination_prefix=(
                f'gs://{shared_state["staging_bucket_name"]}/bp_results/'
            ),
            sync=False,
        )
        resources.append(batch_prediction_job)

        batch_prediction_job.wait()
        model.wait()
        assert job.state == pipeline_state.PipelineState.PIPELINE_STATE_SUCCEEDED
        assert batch_prediction_job.state == job_state.JobState.JOB_STATE_SUCCEEDED
    finally:
        for resource in resources:
>           resource.delete()

tests/system/aiplatform/test_e2e_forecasting.py:395:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/base.py:857: in wrapper
    self._raise_future_exception()
google/cloud/aiplatform/base.py:272: in _raise_future_exception
    raise self._exception
tests/system/aiplatform/test_e2e_forecasting.py:389: in test_end_to_end_forecasting
    batch_prediction_job.wait()
google/cloud/aiplatform/base.py:306: in wait
    self._raise_future_exception()
google/cloud/aiplatform/base.py:272: in _raise_future_exception
    raise self._exception
google/cloud/aiplatform/base.py:284: in _complete_future
    future.result() # raises
/usr/local/lib/python3.10/concurrent/futures/_base.py:451: in result
    return self.__get_result()
/usr/local/lib/python3.10/concurrent/futures/_base.py:403: in __get_result
    raise self._exception
/usr/local/lib/python3.10/concurrent/futures/thread.py:58: in run
    result = self.fn(*self.args, **self.kwargs)
google/cloud/aiplatform/base.py:372: in wait_for_dependencies_and_invoke
    future.result()
/usr/local/lib/python3.10/concurrent/futures/_base.py:458: in result
    return self.__get_result()
/usr/local/lib/python3.10/concurrent/futures/_base.py:403: in __get_result
    raise self._exception
google/cloud/aiplatform/base.py:284: in _complete_future
    future.result() # raises
/usr/local/lib/python3.10/concurrent/futures/_base.py:451: in result
    return self.__get_result()
/usr/local/lib/python3.10/concurrent/futures/_base.py:403: in __get_result
    raise self._exception
google/cloud/aiplatform/base.py:284: in _complete_future
    future.result() # raises
/usr/local/lib/python3.10/concurrent/futures/_base.py:451: in result
    return self.__get_result()
/usr/local/lib/python3.10/concurrent/futures/_base.py:403: in __get_result
    raise self._exception
google/cloud/aiplatform/base.py:284: in _complete_future
    future.result() # raises
/usr/local/lib/python3.10/concurrent/futures/_base.py:451: in result
    return self.__get_result()
/usr/local/lib/python3.10/concurrent/futures/_base.py:403: in __get_result
    raise self._exception
/usr/local/lib/python3.10/concurrent/futures/thread.py:58: in run
    result = self.fn(*self.args, **self.kwargs)
google/cloud/aiplatform/base.py:374: in wait_for_dependencies_and_invoke
    result = method(*args, **kwargs)
google/cloud/aiplatform/training_jobs.py:2658: in _run
    new_model = self._run_job(
google/cloud/aiplatform/training_jobs.py:854: in _run_job
    model = self._get_model(block=block)
google/cloud/aiplatform/training_jobs.py:941: in _get_model
    self._block_until_complete()
google/cloud/aiplatform/training_jobs.py:984: in _block_until_complete
    self._raise_failure()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource name: projects/580378083368/locations/us-central1/trainingPipelines/1039046013031022592

def _raise_failure(self):
    """Helper method to raise failure if TrainingPipeline fails.

    Raises:
        RuntimeError: If training failed.
    """

    if self._gca_resource.error.code != code_pb2.OK:
>       raise RuntimeError("Training failed with:\n%s" % self._gca_resource.error)
E       RuntimeError: Training failed with:
E       code: 14
E       message: "UNAVAILABLE"

google/cloud/aiplatform/training_jobs.py:1001: RuntimeError
------------------------------ Captured log call -------------------------------
INFO google.cloud.aiplatform.datasets.dataset:base.py:85 Creating TimeSeriesDataset
INFO google.cloud.aiplatform.datasets.dataset:base.py:88 Create TimeSeriesDataset backing LRO: projects/580378083368/locations/us-central1/datasets/6066336053684338688/operations/4997650230825451520
INFO google.cloud.aiplatform.datasets.dataset:base.py:113 TimeSeriesDataset created. Resource name: projects/580378083368/locations/us-central1/datasets/6066336053684338688
INFO google.cloud.aiplatform.datasets.dataset:base.py:114 To use this TimeSeriesDataset in another session:
INFO google.cloud.aiplatform.datasets.dataset:base.py:115 ds = aiplatform.TimeSeriesDataset('projects/580378083368/locations/us-central1/datasets/6066336053684338688')
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:550 No dataset split provided. The service will use a default split.
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:852 View Training:
https://console.cloud.google.com/ai/platform/locations/us-central1/training/1039046013031022592?project=580378083368
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:971 TimeSeriesDenseEncoderForecastingTrainingJob projects/580378083368/locations/us-central1/trainingPipelines/1039046013031022592 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO google.cloud.aiplatform.base:base.py:189 Deleting TimeSeriesDataset : projects/580378083368/locations/us-central1/datasets/6066336053684338688
INFO google.cloud.aiplatform.base:base.py:222 TimeSeriesDataset deleted. Resource name: projects/580378083368/locations/us-central1/datasets/6066336053684338688
INFO google.cloud.aiplatform.base:base.py:156 Deleting TimeSeriesDataset resource: projects/580378083368/locations/us-central1/datasets/6066336053684338688
INFO google.cloud.aiplatform.base:base.py:161 Delete TimeSeriesDataset backing LRO: projects/580378083368/locations/us-central1/operations/3694280351166758912
INFO google.cloud.aiplatform.base:base.py:174 TimeSeriesDataset resource projects/580378083368/locations/us-central1/datasets/6066336053684338688 deleted.
_ TestEndToEndForecasting1.test_end_to_end_forecasting[AutoMLForecastingTrainingJob] _
[gw4] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python

self =
shared_state = {'bucket': , 'staging_bucket_name': 'temp-ver...ting-57a4ecb8-9699-414f-8e61-c9b3b7d', 'storage_client': }
training_job =

@pytest.mark.parametrize(
    "training_job",
    [
        training_jobs.AutoMLForecastingTrainingJob,
    ],
)
def test_end_to_end_forecasting(self, shared_state, training_job):
    """Builds a dataset, trains models, and gets batch predictions."""
    resources = []

    aiplatform.init(
        project=e2e_base._PROJECT,
        location=e2e_base._LOCATION,
        staging_bucket=shared_state["staging_bucket_name"],
    )
    try:
        ds = aiplatform.TimeSeriesDataset.create(
            display_name=self._make_display_name("dataset"),
            bq_source=[_TRAINING_DATASET_BQ_PATH],
            sync=False,
            create_request_timeout=180.0,
        )
        resources.append(ds)

        time_column = "date"
        time_series_identifier_column = "store_name"
        target_column = "sale_dollars"
        column_specs = {
            time_column: "timestamp",
            target_column: "numeric",
            "city": "categorical",
            "zip_code": "categorical",
            "county": "categorical",
        }

        job = training_job(
            display_name=self._make_display_name("train-housing-forecasting"),
            optimization_objective="minimize-rmse",
            column_specs=column_specs,
        )
        resources.append(job)

        model = job.run(
            dataset=ds,
            target_column=target_column,
            time_column=time_column,
            time_series_identifier_column=time_series_identifier_column,
            available_at_forecast_columns=[time_column],
            unavailable_at_forecast_columns=[target_column],
            time_series_attribute_columns=["city", "zip_code", "county"],
            forecast_horizon=30,
            context_window=30,
            data_granularity_unit="day",
            data_granularity_count=1,
            budget_milli_node_hours=1000,
            holiday_regions=["GLOBAL"],
            hierarchy_group_total_weight=1,
            window_stride_length=1,
            model_display_name=self._make_display_name("forecasting-liquor-model"),
            sync=False,
        )
        resources.append(model)

        batch_prediction_job = model.batch_predict(
            job_display_name=self._make_display_name("forecasting-liquor-model"),
            instances_format="bigquery",
            predictions_format="csv",
            machine_type="n1-standard-4",
            bigquery_source=_PREDICTION_DATASET_BQ_PATH,
            gcs_destination_prefix=(
                f'gs://{shared_state["staging_bucket_name"]}/bp_results/'
            ),
            sync=False,
        )
        resources.append(batch_prediction_job)

        batch_prediction_job.wait()
        model.wait()
        assert job.state == pipeline_state.PipelineState.PIPELINE_STATE_SUCCEEDED
        assert batch_prediction_job.state == job_state.JobState.JOB_STATE_SUCCEEDED
    finally:
        for resource in resources:
>           resource.delete()

tests/system/aiplatform/test_e2e_forecasting.py:122:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/base.py:857: in wrapper
self._raise_future_exception()
google/cloud/aiplatform/base.py:272: in _raise_future_exception
raise self._exception
tests/system/aiplatform/test_e2e_forecasting.py:116: in test_end_to_end_forecasting
batch_prediction_job.wait()
google/cloud/aiplatform/base.py:306: in wait
self._raise_future_exception()
google/cloud/aiplatform/base.py:272: in _raise_future_exception
raise self._exception
google/cloud/aiplatform/base.py:284: in _complete_future
future.result() # raises
/usr/local/lib/python3.10/concurrent/futures/_base.py:451: in result
return self.__get_result()
/usr/local/lib/python3.10/concurrent/futures/_base.py:403: in __get_result
raise self._exception
/usr/local/lib/python3.10/concurrent/futures/thread.py:58: in run
result = self.fn(*self.args, **self.kwargs)
google/cloud/aiplatform/base.py:372: in wait_for_dependencies_and_invoke
future.result()
/usr/local/lib/python3.10/concurrent/futures/_base.py:458: in result
return self.__get_result()
/usr/local/lib/python3.10/concurrent/futures/_base.py:403: in __get_result
raise self._exception
google/cloud/aiplatform/base.py:284: in _complete_future
future.result() # raises
/usr/local/lib/python3.10/concurrent/futures/_base.py:451: in result
return self.__get_result()
/usr/local/lib/python3.10/concurrent/futures/_base.py:403: in __get_result
raise self._exception
google/cloud/aiplatform/base.py:284: in _complete_future
future.result() # raises
/usr/local/lib/python3.10/concurrent/futures/_base.py:451: in result
return self.__get_result()
/usr/local/lib/python3.10/concurrent/futures/_base.py:403: in __get_result
raise self._exception
google/cloud/aiplatform/base.py:284: in _complete_future
future.result() # raises
/usr/local/lib/python3.10/concurrent/futures/_base.py:451: in result
return self.__get_result()
/usr/local/lib/python3.10/concurrent/futures/_base.py:403: in __get_result
raise self._exception
/usr/local/lib/python3.10/concurrent/futures/thread.py:58: in run
result = self.fn(*self.args, **self.kwargs)
google/cloud/aiplatform/base.py:374: in wait_for_dependencies_and_invoke
result = method(*args, **kwargs)
google/cloud/aiplatform/training_jobs.py:2658: in _run
new_model = self._run_job(
google/cloud/aiplatform/training_jobs.py:854: in _run_job
model = self._get_model(block=block)
google/cloud/aiplatform/training_jobs.py:941: in _get_model
self._block_until_complete()
google/cloud/aiplatform/training_jobs.py:984: in _block_until_complete
self._raise_failure()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource name: projects/580378083368/locations/us-central1/trainingPipelines/8990151155153633280

def _raise_failure(self):
    """Helper method to raise failure if TrainingPipeline fails.

    Raises:
        RuntimeError: If training failed.
    """

    if self._gca_resource.error.code != code_pb2.OK:
>       raise RuntimeError("Training failed with:\n%s" % self._gca_resource.error)
E       RuntimeError: Training failed with:
E       code: 14
E       message: "UNAVAILABLE"

google/cloud/aiplatform/training_jobs.py:1001: RuntimeError
------------------------------ Captured log call -------------------------------
INFO google.cloud.aiplatform.datasets.dataset:base.py:85 Creating TimeSeriesDataset
INFO google.cloud.aiplatform.datasets.dataset:base.py:88 Create TimeSeriesDataset backing LRO: projects/580378083368/locations/us-central1/datasets/5122831931750219776/operations/5343301502226137088
INFO google.cloud.aiplatform.datasets.dataset:base.py:113 TimeSeriesDataset created. Resource name: projects/580378083368/locations/us-central1/datasets/5122831931750219776
INFO google.cloud.aiplatform.datasets.dataset:base.py:114 To use this TimeSeriesDataset in another session:
INFO google.cloud.aiplatform.datasets.dataset:base.py:115 ds = aiplatform.TimeSeriesDataset('projects/580378083368/locations/us-central1/datasets/5122831931750219776')
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:550 No dataset split provided. The service will use a default split.
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:852 View Training:
https://console.cloud.google.com/ai/platform/locations/us-central1/training/8990151155153633280?project=580378083368
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:971 AutoMLForecastingTrainingJob projects/580378083368/locations/us-central1/trainingPipelines/8990151155153633280 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO google.cloud.aiplatform.base:base.py:189 Deleting TimeSeriesDataset : projects/580378083368/locations/us-central1/datasets/5122831931750219776
INFO google.cloud.aiplatform.base:base.py:222 TimeSeriesDataset deleted. Resource name: projects/580378083368/locations/us-central1/datasets/5122831931750219776
INFO google.cloud.aiplatform.base:base.py:156 Deleting TimeSeriesDataset resource: projects/580378083368/locations/us-central1/datasets/5122831931750219776
INFO google.cloud.aiplatform.base:base.py:161 Delete TimeSeriesDataset backing LRO: projects/580378083368/locations/us-central1/operations/1829790105435373568
INFO google.cloud.aiplatform.base:base.py:174 TimeSeriesDataset resource projects/580378083368/locations/us-central1/datasets/5122831931750219776 deleted.
_ TestEndToEndForecasting2.test_end_to_end_forecasting[SequenceToSequencePlusForecastingTrainingJob] _
[gw5] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python

self =
shared_state = {'bucket': , 'staging_bucket_name': 'temp-ver...ting-563c7c19-a9e8-4566-b7d2-d7508ab', 'storage_client': }
training_job =

@pytest.mark.parametrize(
    "training_job",
    [
        training_jobs.SequenceToSequencePlusForecastingTrainingJob,
    ],
)
def test_end_to_end_forecasting(self, shared_state, training_job):
    """Builds a dataset, trains models, and gets batch predictions."""
    resources = []

    aiplatform.init(
        project=e2e_base._PROJECT,
        location=e2e_base._LOCATION,
        staging_bucket=shared_state["staging_bucket_name"],
    )
    try:
        ds = aiplatform.TimeSeriesDataset.create(
            display_name=self._make_display_name("dataset"),
            bq_source=[_TRAINING_DATASET_BQ_PATH],
            sync=False,
            create_request_timeout=180.0,
        )
        resources.append(ds)

        time_column = "date"
        time_series_identifier_column = "store_name"
        target_column = "sale_dollars"
        column_specs = {
            time_column: "timestamp",
            target_column: "numeric",
            "city": "categorical",
            "zip_code": "categorical",
            "county": "categorical",
        }

        job = training_job(
            display_name=self._make_display_name("train-housing-forecasting"),
            optimization_objective="minimize-rmse",
            column_specs=column_specs,
        )
        resources.append(job)

        model = job.run(
            dataset=ds,
            target_column=target_column,
            time_column=time_column,
            time_series_identifier_column=time_series_identifier_column,
            available_at_forecast_columns=[time_column],
            unavailable_at_forecast_columns=[target_column],
            time_series_attribute_columns=["city", "zip_code", "county"],
            forecast_horizon=30,
            context_window=30,
            data_granularity_unit="day",
            data_granularity_count=1,
            budget_milli_node_hours=1000,
            holiday_regions=["GLOBAL"],
            hierarchy_group_total_weight=1,
            window_stride_length=1,
            model_display_name=self._make_display_name("forecasting-liquor-model"),
            sync=False,
        )
        resources.append(model)

        batch_prediction_job = model.batch_predict(
            job_display_name=self._make_display_name("forecasting-liquor-model"),
            instances_format="bigquery",
            predictions_format="csv",
            machine_type="n1-standard-4",
            bigquery_source=_PREDICTION_DATASET_BQ_PATH,
            gcs_destination_prefix=(
                f'gs://{shared_state["staging_bucket_name"]}/bp_results/'
            ),
            sync=False,
        )
        resources.append(batch_prediction_job)

        batch_prediction_job.wait()
        model.wait()
        assert job.state == pipeline_state.PipelineState.PIPELINE_STATE_SUCCEEDED
        assert batch_prediction_job.state == job_state.JobState.JOB_STATE_SUCCEEDED
    finally:
        for resource in resources:
>           resource.delete()

tests/system/aiplatform/test_e2e_forecasting.py:213:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/base.py:857: in wrapper
self._raise_future_exception()
google/cloud/aiplatform/base.py:272: in _raise_future_exception
raise self._exception
tests/system/aiplatform/test_e2e_forecasting.py:207: in test_end_to_end_forecasting
batch_prediction_job.wait()
google/cloud/aiplatform/base.py:306: in wait
self._raise_future_exception()
google/cloud/aiplatform/base.py:272: in _raise_future_exception
raise self._exception
google/cloud/aiplatform/base.py:284: in _complete_future
future.result() # raises
/usr/local/lib/python3.10/concurrent/futures/_base.py:451: in result
return self.__get_result()
/usr/local/lib/python3.10/concurrent/futures/_base.py:403: in __get_result
raise self._exception
/usr/local/lib/python3.10/concurrent/futures/thread.py:58: in run
result = self.fn(*self.args, **self.kwargs)
google/cloud/aiplatform/base.py:372: in wait_for_dependencies_and_invoke
future.result()
/usr/local/lib/python3.10/concurrent/futures/_base.py:458: in result
return self.__get_result()
/usr/local/lib/python3.10/concurrent/futures/_base.py:403: in __get_result
raise self._exception
google/cloud/aiplatform/base.py:284: in _complete_future
future.result() # raises
/usr/local/lib/python3.10/concurrent/futures/_base.py:451: in result
return self.__get_result()
/usr/local/lib/python3.10/concurrent/futures/_base.py:403: in __get_result
raise self._exception
google/cloud/aiplatform/base.py:284: in _complete_future
future.result() # raises
/usr/local/lib/python3.10/concurrent/futures/_base.py:451: in result
return self.__get_result()
/usr/local/lib/python3.10/concurrent/futures/_base.py:403: in __get_result
raise self._exception
google/cloud/aiplatform/base.py:284: in _complete_future
future.result() # raises
/usr/local/lib/python3.10/concurrent/futures/_base.py:451: in result
return self.__get_result()
/usr/local/lib/python3.10/concurrent/futures/_base.py:403: in __get_result
raise self._exception
/usr/local/lib/python3.10/concurrent/futures/thread.py:58: in run
result = self.fn(*self.args, **self.kwargs)
google/cloud/aiplatform/base.py:374: in wait_for_dependencies_and_invoke
result = method(*args, **kwargs)
google/cloud/aiplatform/training_jobs.py:2658: in _run
new_model = self._run_job(
google/cloud/aiplatform/training_jobs.py:854: in _run_job
model = self._get_model(block=block)
google/cloud/aiplatform/training_jobs.py:941: in _get_model
self._block_until_complete()
google/cloud/aiplatform/training_jobs.py:984: in _block_until_complete
self._raise_failure()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
resource name: projects/580378083368/locations/us-central1/trainingPipelines/6986049320973762560

def _raise_failure(self):
    """Helper method to raise failure if TrainingPipeline fails.

    Raises:
        RuntimeError: If training failed.
    """

    if self._gca_resource.error.code != code_pb2.OK:
>       raise RuntimeError("Training failed with:\n%s" % self._gca_resource.error)
E       RuntimeError: Training failed with:
E       code: 4
E       message: "DEADLINE_EXCEEDED"

google/cloud/aiplatform/training_jobs.py:1001: RuntimeError
------------------------------ Captured log call -------------------------------
INFO google.cloud.aiplatform.datasets.dataset:base.py:85 Creating TimeSeriesDataset
INFO google.cloud.aiplatform.datasets.dataset:base.py:88 Create TimeSeriesDataset backing LRO: projects/580378083368/locations/us-central1/datasets/8223560275194806272/operations/1115547352032083968
INFO google.cloud.aiplatform.datasets.dataset:base.py:113 TimeSeriesDataset created. Resource name: projects/580378083368/locations/us-central1/datasets/8223560275194806272
INFO google.cloud.aiplatform.datasets.dataset:base.py:114 To use this TimeSeriesDataset in another session:
INFO google.cloud.aiplatform.datasets.dataset:base.py:115 ds = aiplatform.TimeSeriesDataset('projects/580378083368/locations/us-central1/datasets/8223560275194806272')
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:550 No dataset split provided. The service will use a default split.
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:852 View Training:
https://console.cloud.google.com/ai/platform/locations/us-central1/training/6986049320973762560?project=580378083368
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:971 SequenceToSequencePlusForecastingTrainingJob projects/580378083368/locations/us-central1/trainingPipelines/6986049320973762560 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:971 SequenceToSequencePlusForecastingTrainingJob projects/580378083368/locations/us-central1/trainingPipelines/6986049320973762560 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:971 SequenceToSequencePlusForecastingTrainingJob projects/580378083368/locations/us-central1/trainingPipelines/6986049320973762560 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:971 SequenceToSequencePlusForecastingTrainingJob projects/580378083368/locations/us-central1/trainingPipelines/6986049320973762560 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:971 SequenceToSequencePlusForecastingTrainingJob projects/580378083368/locations/us-central1/trainingPipelines/6986049320973762560 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO google.cloud.aiplatform.training_jobs:training_jobs.py:971 SequenceToSequencePlusForecastingTrainingJob projects/580378083368/locations/us-central1/trainingPipelines/6986049320973762560 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO google.cloud.aiplatform.base:base.py:189 Deleting TimeSeriesDataset : projects/580378083368/locations/us-central1/datasets/8223560275194806272
INFO google.cloud.aiplatform.base:base.py:222 TimeSeriesDataset deleted. . Resource name: projects/580378083368/locations/us-central1/datasets/8223560275194806272
INFO google.cloud.aiplatform.base:base.py:156 Deleting TimeSeriesDataset resource: projects/580378083368/locations/us-central1/datasets/8223560275194806272
INFO google.cloud.aiplatform.base:base.py:161 Delete TimeSeriesDataset backing LRO: projects/580378083368/locations/us-central1/operations/5874022570814078976
INFO google.cloud.aiplatform.base:base.py:174 TimeSeriesDataset resource projects/580378083368/locations/us-central1/datasets/8223560275194806272 deleted.
______ TestModelDeploymentMonitoring.test_mdm_two_models_one_valid_config ______
[gw5] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python

args = (parent: "projects/ucaip-sample-tests/locations/us-central1"
model_deployment_monitoring_job {
display_name: "temp_e...ocacyorg.joonix.net"
}
enable_logging: true
}
sample_predict_instance {
null_value: NULL_VALUE
}
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1'), ('x-goog-api-clie...0+top_google_constructor_method+google.cloud.aiplatform.jobs.ModelDeploymentMonitoringJob.create')], 'timeout': 3600.0}

@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)

.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:946: in __call__
return _end_unary_response_blocking(state, call, False, None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

state =
call =
with_call = False, deadline = None

def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.PERMISSION_DENIED
E details = "Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again."
E debug_error_string = "UNKNOWN:Error received from peer ipv4:74.125.195.95:443 {grpc_message:"Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again.", grpc_status:7, created_time:"2024-11-20T01:16:49.384713392+00:00"}"
E >

.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:849: _InactiveRpcError

The above exception was the direct cause of the following exception:

self =
shared_state = {'resources': [
resource name: projects/580378083368/locations/us-central1/endpoints/8324324468566523904]}

def test_mdm_two_models_one_valid_config(self, shared_state):
"""
Enable model monitoring on two existing models deployed to the same endpoint.
"""
assert len(shared_state["resources"]) == 1
self.endpoint = shared_state["resources"][0]
aiplatform.init(project=e2e_base._PROJECT, location=e2e_base._LOCATION)
# test model monitoring configurations
> job = aiplatform.ModelDeploymentMonitoringJob.create(
display_name=self._make_display_name(key=JOB_NAME),
logging_sampling_strategy=sampling_strategy,
schedule_config=schedule_config,
alert_config=email_alert_config,
objective_configs=objective_config,
create_request_timeout=3600,
project=e2e_base._PROJECT,
location=e2e_base._LOCATION,
endpoint=self.endpoint,
)

tests/system/aiplatform/test_model_monitoring.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/jobs.py:3479: in create
self._gca_resource = self.api_client.create_model_deployment_monitoring_job(
google/cloud/aiplatform_v1/services/job_service/client.py:4347: in create_model_deployment_monitoring_job
response = rpc(
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131: in __call__
return wrapped_func(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/timeout.py:120: in func_with_timeout
return func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

args = (parent: "projects/ucaip-sample-tests/locations/us-central1"
model_deployment_monitoring_job {
display_name: "temp_e...ocacyorg.joonix.net"
}
enable_logging: true
}
sample_predict_instance {
null_value: NULL_VALUE
}
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1'), ('x-goog-api-clie...0+top_google_constructor_method+google.cloud.aiplatform.jobs.ModelDeploymentMonitoringJob.create')], 'timeout': 3600.0}

@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.PermissionDenied: 403 Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again.

.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:78: PermissionDenied
------------------------------ Captured log call -------------------------------
INFO google.cloud.aiplatform.jobs:base.py:85 Creating ModelDeploymentMonitoringJob
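All three `TestModelDeploymentMonitoring` failures below share this one root cause: the Vertex AI Service Agent lacks `bigquery.tables.get` on the training table. A minimal remediation sketch, assuming the standard service-agent naming scheme and the `google-cloud-bigquery` client (the `grant_table_viewer` helper is illustrative, not part of the test suite; the project number and table ID are taken from the error message above):

```python
# Sketch: grant the Vertex AI Service Agent read access to the BigQuery
# training table referenced by the failing monitoring-job tests.
# Assumption: the agent follows the standard naming scheme
# service-<PROJECT_NUMBER>@gcp-sa-aiplatform.iam.gserviceaccount.com.
PROJECT_NUMBER = 580378083368
SERVICE_AGENT = f"service-{PROJECT_NUMBER}@gcp-sa-aiplatform.iam.gserviceaccount.com"
TABLE_ID = "mco-mm.bqmlga4.train"  # from "bq://mco-mm.bqmlga4.train" in the error


def grant_table_viewer(client, table_id: str, member: str) -> None:
    """Append a roles/bigquery.dataViewer binding to the table's IAM policy.

    roles/bigquery.dataViewer includes the bigquery.tables.get permission
    the error message asks for.
    """
    policy = client.get_iam_policy(table_id)
    policy.bindings.append(
        {"role": "roles/bigquery.dataViewer", "members": {f"serviceAccount:{member}"}}
    )
    client.set_iam_policy(table_id, policy)


# Usage (requires credentials with permission to edit the table's IAM policy):
# from google.cloud import bigquery
# grant_table_viewer(bigquery.Client(project="mco-mm"), TABLE_ID, SERVICE_AGENT)
```

Granting the role on the table (rather than project-wide) keeps the fix scoped to the one resource the monitoring jobs read.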
_____ TestModelDeploymentMonitoring.test_mdm_two_models_two_valid_configs ______
[gw5] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python

args = (parent: "projects/ucaip-sample-tests/locations/us-central1"
model_deployment_monitoring_job {
display_name: "temp_e...ocacyorg.joonix.net"
}
enable_logging: true
}
sample_predict_instance {
null_value: NULL_VALUE
}
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1'), ('x-goog-api-clie...0+top_google_constructor_method+google.cloud.aiplatform.jobs.ModelDeploymentMonitoringJob.create')], 'timeout': 3600.0}

@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)

.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:946: in __call__
return _end_unary_response_blocking(state, call, False, None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

state =
call =
with_call = False, deadline = None

def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.PERMISSION_DENIED
E details = "Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again."
E debug_error_string = "UNKNOWN:Error received from peer ipv4:74.125.20.95:443 {grpc_message:"Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again.", grpc_status:7, created_time:"2024-11-20T01:16:50.705246011+00:00"}"
E >

.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:849: _InactiveRpcError

The above exception was the direct cause of the following exception:

self =
shared_state = {'resources': [
resource name: projects/580378083368/locations/us-central1/endpoints/8324324468566523904]}

def test_mdm_two_models_two_valid_configs(self, shared_state):
assert len(shared_state["resources"]) == 1
self.endpoint = shared_state["resources"][0]
aiplatform.init(project=e2e_base._PROJECT, location=e2e_base._LOCATION)
[deployed_model1, deployed_model2] = list(
map(lambda x: x.id, self.endpoint.list_models())
)
all_configs = {
deployed_model1: objective_config,
deployed_model2: objective_config2,
}
> job = aiplatform.ModelDeploymentMonitoringJob.create(
display_name=self._make_display_name(key=JOB_NAME),
logging_sampling_strategy=sampling_strategy,
schedule_config=schedule_config,
alert_config=email_alert_config,
objective_configs=all_configs,
create_request_timeout=3600,
project=e2e_base._PROJECT,
location=e2e_base._LOCATION,
endpoint=self.endpoint,
)

tests/system/aiplatform/test_model_monitoring.py:292:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/jobs.py:3479: in create
self._gca_resource = self.api_client.create_model_deployment_monitoring_job(
google/cloud/aiplatform_v1/services/job_service/client.py:4347: in create_model_deployment_monitoring_job
response = rpc(
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131: in __call__
return wrapped_func(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/timeout.py:120: in func_with_timeout
return func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

args = (parent: "projects/ucaip-sample-tests/locations/us-central1"
model_deployment_monitoring_job {
display_name: "temp_e...ocacyorg.joonix.net"
}
enable_logging: true
}
sample_predict_instance {
null_value: NULL_VALUE
}
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1'), ('x-goog-api-clie...0+top_google_constructor_method+google.cloud.aiplatform.jobs.ModelDeploymentMonitoringJob.create')], 'timeout': 3600.0}

@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.PermissionDenied: 403 Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again.

.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:78: PermissionDenied
------------------------------ Captured log call -------------------------------
INFO google.cloud.aiplatform.jobs:base.py:85 Creating ModelDeploymentMonitoringJob
___ TestModelDeploymentMonitoring.test_mdm_notification_channel_alert_config ___
[gw5] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python

args = (parent: "projects/ucaip-sample-tests/locations/us-central1"
model_deployment_monitoring_job {
display_name: "temp_e...le-tests/notificationChannels/11578134490450491958"
}
sample_predict_instance {
null_value: NULL_VALUE
}
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1'), ('x-goog-api-clie...0+top_google_constructor_method+google.cloud.aiplatform.jobs.ModelDeploymentMonitoringJob.create')], 'timeout': 3600.0}

@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)

.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:76:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:946: in __call__
return _end_unary_response_blocking(state, call, False, None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

state =
call =
with_call = False, deadline = None

def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.PERMISSION_DENIED
E details = "Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again."
E debug_error_string = "UNKNOWN:Error received from peer ipv4:74.125.20.95:443 {grpc_message:"Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again.", grpc_status:7, created_time:"2024-11-20T01:16:52.89606453+00:00"}"
E >

.nox/system-3-10/lib/python3.10/site-packages/grpc/_channel.py:849: _InactiveRpcError

The above exception was the direct cause of the following exception:

self =
shared_state = {'resources': [
resource name: projects/580378083368/locations/us-central1/endpoints/8324324468566523904]}

def test_mdm_notification_channel_alert_config(self, shared_state):
self.endpoint = shared_state["resources"][0]
aiplatform.init(project=e2e_base._PROJECT, location=e2e_base._LOCATION)
# Reset objective_config.explanation_config
objective_config.explanation_config = None
# test model monitoring configurations
> job = aiplatform.ModelDeploymentMonitoringJob.create(
display_name=self._make_display_name(key=JOB_NAME),
logging_sampling_strategy=sampling_strategy,
schedule_config=schedule_config,
alert_config=alert_config,
objective_configs=objective_config,
create_request_timeout=3600,
project=e2e_base._PROJECT,
location=e2e_base._LOCATION,
endpoint=self.endpoint,
)

tests/system/aiplatform/test_model_monitoring.py:418:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/jobs.py:3479: in create
self._gca_resource = self.api_client.create_model_deployment_monitoring_job(
google/cloud/aiplatform_v1/services/job_service/client.py:4347: in create_model_deployment_monitoring_job
response = rpc(
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131: in __call__
return wrapped_func(*args, **kwargs)
.nox/system-3-10/lib/python3.10/site-packages/google/api_core/timeout.py:120: in func_with_timeout
return func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

args = (parent: "projects/ucaip-sample-tests/locations/us-central1"
model_deployment_monitoring_job {
display_name: "temp_e...le-tests/notificationChannels/11578134490450491958"
}
sample_predict_instance {
null_value: NULL_VALUE
}
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/ucaip-sample-tests/locations/us-central1'), ('x-goog-api-clie...0+top_google_constructor_method+google.cloud.aiplatform.jobs.ModelDeploymentMonitoringJob.create')], 'timeout': 3600.0}

@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.PermissionDenied: 403 Vertex AI Service Agent service-580378083368@gcp-sa-aiplatform.iam.gserviceaccount.com does not have the requisite access to BigQuery [bq://mco-mm.bqmlga4.train]. Ensure that the service account has been granted the bigquery.tables.get permission and try again.

.nox/system-3-10/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:78: PermissionDenied
------------------------------ Captured log call -------------------------------
INFO google.cloud.aiplatform.jobs:base.py:85 Creating ModelDeploymentMonitoringJob
---------------------------- Captured log teardown -----------------------------
INFO google.cloud.aiplatform.models:base.py:189 Undeploying Endpoint model: projects/580378083368/locations/us-central1/endpoints/8324324468566523904
INFO google.cloud.aiplatform.models:base.py:209 Undeploy Endpoint model backing LRO: projects/580378083368/locations/us-central1/endpoints/8324324468566523904/operations/6245569540072144896
INFO google.cloud.aiplatform.models:base.py:222 Endpoint model undeployed. Resource name: projects/580378083368/locations/us-central1/endpoints/8324324468566523904
INFO google.cloud.aiplatform.models:base.py:189 Undeploying Endpoint model: projects/580378083368/locations/us-central1/endpoints/8324324468566523904
INFO google.cloud.aiplatform.models:base.py:209 Undeploy Endpoint model backing LRO: projects/580378083368/locations/us-central1/endpoints/8324324468566523904/operations/5009331442358943744
INFO google.cloud.aiplatform.models:base.py:222 Endpoint model undeployed. Resource name: projects/580378083368/locations/us-central1/endpoints/8324324468566523904
INFO google.cloud.aiplatform.base:base.py:189 Deleting Endpoint : projects/580378083368/locations/us-central1/endpoints/8324324468566523904
INFO google.cloud.aiplatform.base:base.py:222 Endpoint deleted. . Resource name: projects/580378083368/locations/us-central1/endpoints/8324324468566523904
INFO google.cloud.aiplatform.base:base.py:156 Deleting Endpoint resource: projects/580378083368/locations/us-central1/endpoints/8324324468566523904
INFO google.cloud.aiplatform.base:base.py:161 Delete Endpoint backing LRO: projects/580378083368/locations/us-central1/operations/3111064199422279680
INFO google.cloud.aiplatform.base:base.py:174 Endpoint resource projects/580378083368/locations/us-central1/endpoints/8324324468566523904 deleted.
____________ TestPersistentResource.test_create_persistent_resource ____________
[gw9] linux -- Python 3.10.15 /tmpfs/src/github/python-aiplatform/.nox/system-3-10/bin/python

self =
shared_state = {}

def test_create_persistent_resource(self, shared_state):
# PersistentResource ID must be shorter than 64 characters.
# IE: "test-pr-e2e-ea3ae19d-3d94-4818-8ecd-1a7a63d7418c"
resource_id = self._make_display_name("")
resource_pools = [
gca_persistent_resource.ResourcePool(
machine_spec=gca_machine_resources.MachineSpec(
machine_type=_TEST_MACHINE_TYPE,
),
replica_count=_TEST_INITIAL_REPLICA_COUNT,
)
]

> test_resource = persistent_resource.PersistentResource.create(
persistent_resource_id=resource_id, resource_pools=resource_pools
)

tests/system/aiplatform/test_persistent_resource.py:61:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/aiplatform/base.py:863: in wrapper
return method(*args, **kwargs)
google/cloud/aiplatform/persistent_resource.py:320: in create
create_lro.result(timeout=None)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
timeout = None, retry = None, polling = None

def result(self, timeout=_DEFAULT_VALUE, retry=None, polling=None):
"""Get the result of the operation.

This method will poll for operation status periodically, blocking if
necessary. If you just want to make sure that this method does not block
for more than X seconds and you do not care about the nitty-gritty of
how this method operates, just call it with ``result(timeout=X)``. The
other parameters are for advanced use only.

Every call to this method is controlled by the following three
parameters, each of which has a specific, distinct role, even though all three
may look very similar: ``timeout``, ``retry`` and ``polling``. In most
cases users do not need to specify any custom values for any of these
parameters and may simply rely on default ones instead.

If you choose to specify custom parameters, please make sure you've
read the documentation below carefully.

First, please check :class:`google.api_core.retry.Retry`
class documentation for the proper definition of timeout and deadline
terms and for the definition the three different types of timeouts.
This class operates in terms of Retry Timeout and Polling Timeout. It
does not let customizing RPC timeout and the user is expected to rely on
default behavior for it.

The roles of each argument of this method are as follows:

``timeout`` (int): (Optional) The Polling Timeout as defined in
:class:`google.api_core.retry.Retry`. If the operation does not complete
within this timeout an exception will be thrown. This parameter affects
neither Retry Timeout nor RPC Timeout.

``retry`` (google.api_core.retry.Retry): (Optional) How to retry the
polling RPC. The ``retry.timeout`` property of this parameter is the
Retry Timeout as defined in :class:`google.api_core.retry.Retry`.
This parameter defines ONLY how the polling RPC call is retried
(i.e. what to do if the RPC we used for polling returned an error). It
does NOT define how the polling is done (i.e. how frequently and for
how long to call the polling RPC); use the ``polling`` parameter for that.
If a polling RPC throws and error and retrying it fails, the whole
future fails with the corresponding exception. If you want to tune which
server response error codes are not fatal for operation polling, use this
parameter to control that (``retry.predicate`` in particular).

``polling`` (google.api_core.retry.Retry): (Optional) How often and
for how long to call the polling RPC periodically (i.e. what to do if
a polling rpc returned successfully but its returned result indicates
that the long running operation is not completed yet, so we need to
check it again at some point in future). This parameter does NOT define
how to retry each individual polling RPC in case of an error; use the
``retry`` parameter for that. The ``polling.timeout`` of this parameter
is Polling Timeout as defined in as defined in
:class:`google.api_core.retry.Retry`.

For each of the arguments, there are also default values in place, which
will be used if a user does not specify their own. The default values
for the three parameters are not to be confused with the default values
for the corresponding arguments in this method (those serve as "not set"
markers for the resolution logic).

If ``timeout`` is provided (i.e.``timeout is not _DEFAULT VALUE``; note
the ``None`` value means "infinite timeout"), it will be used to control
the actual Polling Timeout. Otherwise, the ``polling.timeout`` value
will be used instead (see below for how the ``polling`` config itself
gets resolved). In other words, this parameter effectively overrides
the ``polling.timeout`` value if specified. This is so to preserve
backward compatibility.

If ``retry`` is provided (i.e. ``retry is not None``) it will be used to
control retry behavior for the polling RPC and the ``retry.timeout``
will determine the Retry Timeout. If not provided, the
polling RPC will be called with whichever default retry config was
specified for the polling RPC at the moment of the construction of the
polling RPC's client. For example, if the polling RPC is
``operations_client.get_operation()``, the ``retry`` parameter will be
controlling its retry behavior (not polling behavior) and, if not
specified, that specific method (``operations_client.get_operation()``)
will be retried according to the default retry config provided during
creation of ``operations_client`` client instead. This argument exists
mainly for backward compatibility; users are very unlikely to ever need
to set this parameter explicitly.

If ``polling`` is provided (i.e. ``polling is not None``), it will be used
to control the overall polling behavior and ``polling.timeout`` will
control Polling Timeout unless it is overridden by ``timeout`` parameter
as described above. If not provided, the``polling`` parameter specified
during construction of this future (the ``polling`` argument in the
constructor) will be used instead. Note: since the ``timeout`` argument may
override ``polling.timeout`` value, this parameter should be viewed as
coupled with the ``timeout`` parameter as described above.

Args:
timeout (int): (Optional) How long (in seconds) to wait for the
operation to complete. If None, wait indefinitely.
retry (google.api_core.retry.Retry): (Optional) How to retry the
polling RPC. This defines ONLY how the polling RPC call is
retried (i.e. what to do if the RPC we used for polling returned
an error). It does NOT define how the polling is done (i.e. how
frequently and for how long to call the polling RPC).
polling (google.api_core.retry.Retry): (Optional) How often and
for how long to call polling RPC periodically. This parameter
does NOT define how to retry each individual polling RPC call
(use the ``retry`` parameter for that).

Returns:
google.protobuf.Message: The Operation's result.

Raises:
google.api_core.GoogleAPICallError: If the operation errors or if
the timeout is reached before the operation completes.
"""

self._blocking_poll(timeout=timeout, retry=retry, polling=polling)

if self._exception is not None:
# pylint: disable=raising-bad-type
# Pylint doesn't recognize that this is valid in this case.
> raise self._exception
E google.api_core.exceptions.InternalServerError: 500 An internal error occurred on your cluster. Please try recreating one in a few minutes. If you still experience errors, contact Cloud AI Platform. 13: An internal error occurred on your cluster. Please try recreating one in a few minutes. If you still experience errors, contact Cloud AI Platform.

.nox/system-3-10/lib/python3.10/site-packages/google/api_core/future/polling.py:261: InternalServerError
------------------------------ Captured log call -------------------------------
INFO google.cloud.aiplatform.persistent_resource:base.py:85 Creating PersistentResource
INFO google.cloud.aiplatform.persistent_resource:base.py:88 Create PersistentResource backing LRO: projects/580378083368/locations/us-central1/persistentResources/test-pr-e2e--612dc999-425b-48c5-b146-f96e1708a26e/operations/2658452436871544832
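The `InternalServerError` above is a server-side 500 whose message explicitly suggests recreating the cluster "in a few minutes", i.e. a transient failure. A hedged sketch of retry-with-backoff around the create call (the `retry_transient` helper is illustrative; the default one-minute delay follows the wording of the error message, and the commented usage reuses names from the test above):

```python
import time


def retry_transient(fn, attempts=3, base_delay=60.0, retriable=(Exception,)):
    """Call fn(), retrying transient server errors with linear backoff.

    Waits base_delay seconds after the first failure, 2*base_delay after
    the second, and so on; re-raises once attempts are exhausted.
    """
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except retriable:
            if attempt == attempts:
                raise
            time.sleep(base_delay * attempt)


# Usage sketch, wrapping the create call from the failing test:
# from google.api_core import exceptions
# resource = retry_transient(
#     lambda: persistent_resource.PersistentResource.create(
#         persistent_resource_id=resource_id, resource_pools=resource_pools
#     ),
#     retriable=(exceptions.InternalServerError,),
# )
```

Scoping `retriable` to `InternalServerError` keeps genuine failures (quota, permissions) surfacing immediately instead of being masked by retries.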
=============================== warnings summary ===============================
.nox/system-3-10/lib/python3.10/site-packages/google/cloud/storage/_http.py:19: 16 warnings
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/google/cloud/storage/_http.py:19: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
import pkg_resources

.nox/system-3-10/lib/python3.10/site-packages/pkg_resources/__init__.py:2832: 32 warnings
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pkg_resources/__init__.py:2832: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google')`.
Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
declare_namespace(pkg)

.nox/system-3-10/lib/python3.10/site-packages/pkg_resources/__init__.py:2832: 32 warnings
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pkg_resources/__init__.py:2832: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google.cloud')`.
Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
declare_namespace(pkg)

.nox/system-3-10/lib/python3.10/site-packages/pkg_resources/__init__.py:2317: 16 warnings
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pkg_resources/__init__.py:2317: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google')`.
Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
declare_namespace(parent)

tests/system/aiplatform/test_experiments.py: 38 warnings
tests/system/aiplatform/test_autologging.py: 5 warnings
tests/system/aiplatform/test_custom_job.py: 2 warnings
tests/system/aiplatform/test_model_evaluation.py: 2 warnings
/tmpfs/src/github/python-aiplatform/google/cloud/aiplatform/utils/_ipython_utils.py:145: DeprecationWarning: Importing display from IPython.core.display is deprecated since IPython 7.14, please import from IPython display
from IPython.core.display import display
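The `_ipython_utils.py` warning names the new import location. A hedged sketch of the fix, with a fallback so the snippet also runs in environments without IPython installed:

```python
try:
    # New location, per the DeprecationWarning:
    from IPython.display import display
except ImportError:
    # IPython is optional in many environments; fall back to print.
    display = print

display("experiment run created")
```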

tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_text_generation_model_predict_async
tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_async[grpc-PROD_ENDPOINT]
tests/system/aiplatform/test_model_interactions.py::TestModelInteractions::test_endpoint_predict_async
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pytest_asyncio/plugin.py:783: DeprecationWarning: The event_loop fixture provided by pytest-asyncio has been redefined in
/tmpfs/src/github/python-aiplatform/tests/system/aiplatform/e2e_base.py:212
Replacing the event_loop fixture with a custom implementation is deprecated
and will lead to errors in the future.
If you want to request an asyncio event loop with a scope other than function
scope, use the "scope" argument to the asyncio mark when marking the tests.
If you want to return different types of event loops, use the event_loop_policy
fixture.

warnings.warn(
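Per the pytest-asyncio warning, redefining the `event_loop` fixture (as `e2e_base.py:212` does) is deprecated; the replacement is to request the desired loop scope through the asyncio mark itself. A sketch under that assumption; the test name is illustrative, not one of the tests in this run:

```python
import pytest

# Instead of overriding the event_loop fixture, ask for a class-scoped
# loop via the "scope" argument to the asyncio mark.
@pytest.mark.asyncio(scope="class")
async def test_predict_async():
    ...
```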

tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_xgboost_booster_with_custom_uri
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/xgboost/core.py:158: UserWarning: [23:14:35] WARNING: /workspace/src/c_api/c_api.cc:1374: Saving model in the UBJSON format as default. You can use file extension: `json`, `ubj` or `deprecated` to choose between formats.
warnings.warn(smsg, UserWarning)

tests/system/aiplatform/test_experiment_model.py::TestExperimentModel::test_xgboost_xgbmodel_with_custom_names
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/xgboost/core.py:158: UserWarning: [23:14:40] WARNING: /workspace/src/c_api/c_api.cc:1374: Saving model in the UBJSON format as default. You can use file extension: `json`, `ubj` or `deprecated` to choose between formats.
warnings.warn(smsg, UserWarning)

tests/system/aiplatform/test_pipeline_job.py::TestPipelineJob::test_add_pipeline_job_to_experiment
tests/system/aiplatform/test_experiments.py::TestExperiments::test_add_pipeline_job_to_experiment
tests/system/aiplatform/test_pipeline_job_schedule.py::TestPipelineJobSchedule::test_create_get_pause_resume_update_list
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/kfp/dsl/component_decorator.py:126: FutureWarning: The default base_image used by the @dsl.component decorator will switch from 'python:3.9' to 'python:3.10' on Oct 1, 2025. To ensure your existing components work with versions of the KFP SDK released after that date, you should provide an explicit base_image argument and ensure your component works as intended on Python 3.10.
return component_factory.create_component_from_func(
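The kfp `FutureWarning` asks component authors to pin an explicit `base_image`. A minimal sketch, guarded because kfp may not be installed in the environment running this snippet; the component body is illustrative:

```python
try:
    from kfp import dsl
except ImportError:
    dsl = None  # kfp not installed; skip the example

if dsl is not None:
    # Explicit base_image, per the FutureWarning, instead of relying
    # on the default that changes from python:3.9 to python:3.10.
    @dsl.component(base_image="python:3.10")
    def add(a: int, b: int) -> int:
        return a + b
```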

tests/system/aiplatform/test_pipeline_job.py::TestPipelineJob::test_add_pipeline_job_to_experiment
tests/system/aiplatform/test_pipeline_job_schedule.py::TestPipelineJobSchedule::test_create_get_pause_resume_update_list
/tmpfs/src/github/python-aiplatform/google/cloud/aiplatform/pipeline_jobs.py:898: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
_LOGGER.warn(
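`Logger.warn` is a long-deprecated alias of `Logger.warning`, so the fix in `pipeline_jobs.py` is a one-word rename. A sketch with a capture handler so the output is observable; the logger name and message are illustrative:

```python
import io
import logging

buf = io.StringIO()
logger = logging.getLogger("example.pipeline_jobs")  # illustrative name
logger.addHandler(logging.StreamHandler(buf))
logger.setLevel(logging.WARNING)

# warning(), not the deprecated warn() alias:
logger.warning("pipeline job already associated with an experiment")
```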

tests/system/aiplatform/test_prediction_cpr.py::TestPredictionCpr::test_build_cpr_model_upload_and_deploy
tests/system/aiplatform/test_prediction_cpr.py::TestPredictionCpr::test_build_cpr_model_upload_and_deploy
/usr/local/lib/python3.10/subprocess.py:955: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
self.stdin = io.open(p2cwrite, 'wb', bufsize)

tests/system/aiplatform/test_prediction_cpr.py::TestPredictionCpr::test_build_cpr_model_upload_and_deploy
tests/system/aiplatform/test_prediction_cpr.py::TestPredictionCpr::test_build_cpr_model_upload_and_deploy
/usr/local/lib/python3.10/subprocess.py:961: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
self.stdout = io.open(c2pread, 'rb', bufsize)

tests/system/aiplatform/test_experiments.py::TestExperiments::test_add_pipeline_job_to_experiment
/tmpfs/src/github/python-aiplatform/tests/system/aiplatform/test_experiments.py:376: DeprecationWarning: The module `kfp.v2` is deprecated and will be removed in a future version. Please import directly from the `kfp` namespace, instead of `kfp.v2`.
import kfp.v2.dsl as dsl
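The warning above asks for the top-level `kfp` namespace rather than the deprecated `kfp.v2` alias. A sketch of the import change, guarded in case kfp is absent:

```python
# Deprecated: import kfp.v2.dsl as dsl
try:
    import kfp.dsl as dsl  # preferred import path
except ImportError:
    dsl = None  # kfp not installed in this environment
```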

tests/system/aiplatform/test_experiments.py::TestExperiments::test_add_pipeline_job_to_experiment
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/kfp/compiler/compiler.py:81: DeprecationWarning: Compiling to JSON is deprecated and will be removed in a future version. Please compile to a YAML file by providing a file path with a .yaml extension instead.
builder.write_pipeline_spec_to_file(

tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_serve_to_df
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pyarrow/pandas_compat.py:735: DeprecationWarning: DatetimeTZBlock is deprecated and will be removed in a future version. Use public APIs instead.
klass=_int.DatetimeTZBlock,

tests/system/aiplatform/test_featurestore.py::TestFeaturestore::test_batch_serve_to_df
/tmpfs/src/github/python-aiplatform/.nox/system-3-10/lib/python3.10/site-packages/pandas/core/frame.py:717: DeprecationWarning: Passing a BlockManager to DataFrame is deprecated and will raise in a future version. Use public APIs instead.
warnings.warn(

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
- generated xml file: /tmpfs/src/github/python-aiplatform/system_3.10_sponge_log.xml -
=========================== short test summary info ============================
FAILED tests/system/aiplatform/test_autologging.py::TestAutologging::test_autologging_with_autorun_creation
FAILED tests/system/aiplatform/test_prediction_cpr.py::TestPredictionCpr::test_build_cpr_model_upload_and_deploy
FAILED tests/system/aiplatform/test_language_models.py::TestLanguageModels::test_code_chat_model_send_message_streaming[grpc]
FAILED tests/system/vertex_ray/test_job_submission_dashboard.py::TestJobSubmissionDashboard::test_job_submission_dashboard[2.9]
FAILED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_function_calling[grpc-PROD_ENDPOINT]
FAILED tests/system/vertexai/test_generative_models.py::TestGenerativeModels::test_generate_content_function_calling[rest-PROD_ENDPOINT]
FAILED tests/system/vertex_ray/test_cluster_management.py::TestClusterManagement::test_cluster_management[2.33]
FAILED tests/system/aiplatform/test_private_endpoint.py::TestPrivateEndpoint::test_create_deploy_delete_private_endpoint
FAILED tests/system/aiplatform/test_batch_prediction.py::TestBatchPredictionJob::test_model_monitoring
FAILED tests/system/aiplatform/test_e2e_tabular.py::TestEndToEndTabular::test_end_to_end_tabular
FAILED tests/system/aiplatform/test_e2e_forecasting.py::TestEndToEndForecasting3::test_end_to_end_forecasting[TemporalFusionTransformerForecastingTrainingJob]
FAILED tests/system/aiplatform/test_e2e_forecasting.py::TestEndToEndForecasting4::test_end_to_end_forecasting[TimeSeriesDenseEncoderForecastingTrainingJob]
FAILED tests/system/aiplatform/test_e2e_forecasting.py::TestEndToEndForecasting1::test_end_to_end_forecasting[AutoMLForecastingTrainingJob]
FAILED tests/system/aiplatform/test_e2e_forecasting.py::TestEndToEndForecasting2::test_end_to_end_forecasting[SequenceToSequencePlusForecastingTrainingJob]
FAILED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_two_models_one_valid_config
FAILED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_two_models_two_valid_configs
FAILED tests/system/aiplatform/test_model_monitoring.py::TestModelDeploymentMonitoring::test_mdm_notification_channel_alert_config
FAILED tests/system/aiplatform/test_persistent_resource.py::TestPersistentResource::test_create_persistent_resource
==== 18 failed, 220 passed, 6 skipped, 161 warnings in 12204.00s (3:23:23) =====
nox > Command py.test -v --junitxml=system_3.10_sponge_log.xml tests/system failed with exit code 1
nox > Session system-3.10 failed.
[FlakyBot] Sending logs to Flaky Bot...
[FlakyBot] See https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot.
[FlakyBot] Published system_3.10_sponge_log.xml (13027750206284517)!
[FlakyBot] Done!
cleanup


[ID: 5338615] Command finished after 12536 secs, exit value: 1


Warning: Permanently added 'localhost' (ED25519) to the list of known hosts.
[18:36:26 PST] Collecting build artifacts from build VM
Build script failed with exit code: 1