Build/Test Explorer

TestFusion
Invocation status: Failed

Kokoro: cloud-devrel/client-libraries/python/googleapis/python-bigquery/presubmit/presubmit

1 target evaluated for 13 min, 27 sec
by kokoro-github-subscriber
1 Failed

Showing build.log

Warning: Permanently added 'localhost' (ED25519) to the list of known hosts.
[10:23:02 PDT] Transferring environment variable script to build VM
[10:23:03 PDT] Transferring kokoro_log_reader.py to build VM
[10:23:04 PDT] Transferring source code to build VM
[10:23:09 PDT] Executing build script on build VM



[ID: 8343082] Executing command via SSH:
export KOKORO_BUILD_NUMBER="5565"
export KOKORO_JOB_NAME="cloud-devrel/client-libraries/python/googleapis/python-bigquery/presubmit/presubmit"
source /tmpfs/kokoro-env_vars.sh; cd /tmpfs/src/ ; chmod 755 github/python-bigquery/.kokoro/trampoline.sh ; PYTHON_3_VERSION="$(pyenv which python3 2> /dev/null || which python3)" ; PYTHON_2_VERSION="$(pyenv which python2 2> /dev/null || which python2)" ; if "$PYTHON_3_VERSION" -c "import psutil" ; then KOKORO_PYTHON_COMMAND="$PYTHON_3_VERSION" ; else KOKORO_PYTHON_COMMAND="$PYTHON_2_VERSION" ; fi > /dev/null 2>&1 ; echo "export KOKORO_PYTHON_COMMAND="$KOKORO_PYTHON_COMMAND"" > "$HOME/.kokoro_python_vars" ; nohup bash -c "( rm -f /tmpfs/kokoro_build_exit_code ; github/python-bigquery/.kokoro/trampoline.sh ; echo \${PIPESTATUS[0]} > /tmpfs/kokoro_build_exit_code ) > /tmpfs/kokoro_build.log 2>&1" > /dev/null 2>&1 & echo $! > /tmpfs/kokoro_build.pid ; source "$HOME/.kokoro_python_vars" ; "$KOKORO_PYTHON_COMMAND" /tmpfs/kokoro_log_reader.py /tmpfs/kokoro_build.log /tmpfs/kokoro_build_exit_code /tmpfs/kokoro_build.pid /tmpfs/kokoro_log_reader.pid --start_byte 0

2024-10-22 10:23:11 Creating folder on disk for secrets: /tmpfs/src/gfile/secret_manager
Activated service account credentials for: [kokoro-trampoline@cloud-devrel-kokoro-resources.iam.gserviceaccount.com]
WARNING: Your config file at [/home/kbuilder/.docker/config.json] contains these credential helper entries:

{
  "credHelpers": {
    "gcr.io": "gcr",
    "us.gcr.io": "gcr",
    "asia.gcr.io": "gcr",
    "staging-k8s.gcr.io": "gcr",
    "eu.gcr.io": "gcr"
  }
}
These will be overwritten.
Docker configuration file updated.
Using default tag: latest
latest: Pulling from cloud-devrel-kokoro-resources/python-multi
(24 fs layers: downloads complete, checksums verified, all layers pulled)
Digest: sha256:9f0d8f0ba8ffb4eb97e8ec2c0beddd51ad027a39e3ca832dbe2e20eba2ae68ad
Status: Downloaded newer image for gcr.io/cloud-devrel-kokoro-resources/python-multi:latest
gcr.io/cloud-devrel-kokoro-resources/python-multi:latest
Executing: docker run --rm --interactive --network=host --privileged --volume=/var/run/docker.sock:/var/run/docker.sock --workdir=/tmpfs/src --entrypoint=github/python-bigquery/.kokoro/build.sh --env-file=/tmpfs/tmp/tmpe2zjk5y0/envfile --volume=/tmpfs:/tmpfs gcr.io/cloud-devrel-kokoro-resources/python-multi
KOKORO_KEYSTORE_DIR=/tmpfs/src/keystore
KOKORO_GITHUB_PULL_REQUEST_NUMBER=2045
KOKORO_GITHUB_PULL_REQUEST_URL=https://github.com/googleapis/python-bigquery/pull/2045
KOKORO_JOB_NAME=cloud-devrel/client-libraries/python/googleapis/python-bigquery/presubmit/presubmit
KOKORO_GIT_COMMIT=3bf9de77707c71e2b072670930bbc8225d6b24ea
KOKORO_JOB_CLUSTER=GCP_UBUNTU
KOKORO_GITHUB_PULL_REQUEST_COMMIT=3bf9de77707c71e2b072670930bbc8225d6b24ea
KOKORO_BLAZE_DIR=/tmpfs/src/objfs
KOKORO_ROOT=/tmpfs
KOKORO_GITHUB_PULL_REQUEST_TARGET_BRANCH=main
KOKORO_JOB_TYPE=PRESUBMIT_GITHUB
KOKORO_ROOT_DIR=/tmpfs/
KOKORO_BUILD_NUMBER=5565
KOKORO_JOB_POOL=yoshi-ubuntu
KOKORO_BUILD_INITIATOR=kokoro-github-subscriber
KOKORO_ARTIFACTS_DIR=/tmpfs/src
KOKORO_BUILD_ID=138d2e9d-4ca7-46fa-b321-0b6482043c8c
KOKORO_GFILE_DIR=/tmpfs/src/gfile
KOKORO_BUILD_CONFIG_DIR=
KOKORO_POSIX_ROOT=/tmpfs
KOKORO_BUILD_ARTIFACTS_SUBDIR=prod/cloud-devrel/client-libraries/python/googleapis/python-bigquery/presubmit/presubmit/5565/20241022-102217
nox > Running session unit_noextras-3.7
nox > Creating virtual environment (virtualenv) using python3.7 in .nox/unit_noextras-3-7
nox > python -m pip install pyarrow==1.0.0
nox > python -m pip install pytest google-cloud-testutils pytest-cov freezegun -c /tmpfs/src/github/python-bigquery/testing/constraints-3.7.txt
nox > python -m pip install -e . -c /tmpfs/src/github/python-bigquery/testing/constraints-3.7.txt
nox > Command python -m pip install -e . -c /tmpfs/src/github/python-bigquery/testing/constraints-3.7.txt failed with exit code 1:
Obtaining file:///tmpfs/src/github/python-bigquery
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting google-api-core[grpc]@ git+https://github.com/googleapis/python-api-core.git@main (from google-cloud-bigquery==3.26.0)
Cloning https://github.com/googleapis/python-api-core.git (to revision main) to /tmp/pip-install-hv6z_jm9/google-api-core_5afdff393f99436681d1605bc4bbc584
Running command git clone --filter=blob:none --quiet https://github.com/googleapis/python-api-core.git /tmp/pip-install-hv6z_jm9/google-api-core_5afdff393f99436681d1605bc4bbc584
Resolved https://github.com/googleapis/python-api-core.git to commit 0d5ed37c96f9b40bccae98e228163a88abeb1763
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
INFO: pip is looking at multiple versions of google-cloud-bigquery to determine which version is compatible with other requirements. This could take a while.
ERROR: Could not find a version that satisfies the requirement google-api-core[grpc] 2.21.0 (from google-cloud-bigquery) (from versions: 0.1.0, 0.1.1, 0.1.2, 0.1.3, 0.1.4, 1.0.0, 1.1.0, 1.1.1, 1.1.2, 1.2.0, 1.2.1, 1.3.0, 1.4.0, 1.4.1, 1.5.0, 1.5.1, 1.5.2, 1.6.0a1, 1.6.0, 1.7.0, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.10.0, 1.11.0, 1.11.1, 1.12.0, 1.13.0, 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.15.0, 1.16.0, 1.17.0, 1.18.0, 1.19.0, 1.19.1, 1.20.0, 1.20.1, 1.21.0, 1.22.0, 1.22.1, 1.22.2, 1.22.3, 1.22.4, 1.23.0, 1.24.0, 1.24.1, 1.25.0, 1.25.1, 1.26.0.dev0, 1.26.0, 1.26.1, 1.26.2, 1.26.3, 1.27.0, 1.28.0, 1.29.0, 1.30.0, 1.31.0, 1.31.1, 1.31.2, 1.31.3, 1.31.4, 1.31.5, 1.31.6, 1.32.0, 1.33.0b1, 1.33.0, 1.33.1, 1.33.2, 1.34.0, 1.34.1rc1, 1.34.1, 2.0.0b1, 2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.1, 2.3.2, 2.4.0, 2.5.0, 2.6.0, 2.6.1, 2.7.0, 2.7.1, 2.7.2, 2.7.3, 2.8.0, 2.8.1, 2.8.2, 2.9.0, 2.10.0, 2.10.1, 2.10.2, 2.11.0rc1, 2.11.0, 2.11.1rc1, 2.11.1, 2.12.0.dev0, 2.12.0.dev1, 2.12.0rc1, 2.12.0, 2.13.0rc1, 2.13.0, 2.13.1, 2.14.0, 2.15.0rc1, 2.15.0, 2.16.0rc0, 2.16.0, 2.16.1, 2.16.2, 2.17.0rc0, 2.17.0, 2.17.1rc1, 2.17.1, 2.18.0rc0, 2.18.0, 2.19.0rc0, 2.19.0, 2.19.1rc0, 2.19.1, 2.19.2, 2.20.0rc0, 2.20.0, 2.20.1rc0, 2.21.0rc0, 2.21.0)
ERROR: No matching distribution found for google-api-core[grpc] 2.21.0

[notice] A new release of pip is available: 23.1.2 -> 24.0
[notice] To update, run: pip install --upgrade pip
nox > Session unit_noextras-3.7 failed.
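Note on the failure above: the requirement text in the two ERROR lines has lost a fragment to the log viewer (angle-bracketed text is stripped elsewhere in this log as well), so the exact specifier is not recoverable here. Whatever it was, the shape of the failure is an empty intersection between the google-api-core requirement and the pins in testing/constraints-3.7.txt; the 3.12 session below, which uses constraints-3.12.txt, installs google-api-core from the same git commit without trouble. A minimal sketch of that shape, with assumed pins that are for illustration only and not read from this log:

from packaging.specifiers import SpecifierSet
from packaging.version import Version

# Assumed values, not taken from this log:
requirement = SpecifierSet(">=2.21.0")   # e.g. a raised floor in setup.py
constraint = SpecifierSet("==2.11.1")    # e.g. the pin in constraints-3.7.txt
available = [Version(v) for v in ("2.11.1", "2.20.0", "2.21.0")]

# Empty intersection -> pip reports "No matching distribution found"
# even though the floor version itself exists on the index.
print([str(v) for v in available if v in requirement and v in constraint])  # []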
nox > Running session unit_noextras-3.12
nox > Creating virtual environment (virtualenv) using python3.12 in .nox/unit_noextras-3-12
nox > python -m pip install pytest google-cloud-testutils pytest-cov freezegun -c /tmpfs/src/github/python-bigquery/testing/constraints-3.12.txt
nox > python -m pip install -e . -c /tmpfs/src/github/python-bigquery/testing/constraints-3.12.txt
nox > python -m pip freeze
cachetools==5.5.0
certifi==2024.8.30
charset-normalizer==3.4.0
click==8.1.7
coverage==7.6.4
freezegun==1.5.1
google-api-core @ git+https://github.com/googleapis/python-api-core.git@0d5ed37c96f9b40bccae98e228163a88abeb1763
google-auth==2.35.0
# Editable Git install with no remote (google-cloud-bigquery==3.26.0)
-e /tmpfs/src/github/python-bigquery
google-cloud-core==2.4.1
google-cloud-testutils==1.4.0
google-crc32c==1.6.0
google-resumable-media==2.7.2
googleapis-common-protos==1.65.0
grpcio==1.67.0
grpcio-status==1.67.0
idna==3.10
iniconfig==2.0.0
packaging==24.1
pluggy==1.5.0
proto-plus==1.24.0
protobuf==5.28.2
pyasn1==0.6.1
pyasn1_modules==0.4.1
pytest==8.3.3
pytest-cov==5.0.0
python-dateutil==2.9.0.post0
requests==2.32.3
rsa==4.9
six==1.16.0
urllib3==2.2.3
nox > py.test --quiet '-W default::PendingDeprecationWarning' --cov=google/cloud/bigquery --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit
........................................................................ [ 3%]
..................................F.F................................... [ 6%]
........................................................................ [ 9%]
........................................................................ [ 12%]
........................................................F..F.F.FFFF.F... [ 15%]
........................................................................ [ 18%]
........................................................................ [ 22%]
........................................................................ [ 25%]
........................................................................ [ 28%]
........................................................................ [ 31%]
.......................................................s..ssssssssssssss [ 34%]
ssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss [ 37%]
sssssssssssssssssssssssssssssssssssssssssss.s.....sss.sssss.ssssssssss.s [ 41%]
s.........................................F..F.......................... [ 44%]
...........ssss.s..........................ssss......................... [ 47%]
....F..............F..............................s..................... [ 50%]
.......ssssssssssssssssssssssssssss..................................F.. [ 53%]
........................................................................ [ 56%]
....................................................................s... [ 59%]
.............................................................s..s.sss.s. [ 63%]
...........................ssss......................................... [ 66%]
........................................................................ [ 69%]
............FF.........................................................s [ 72%]
sssssss................................................................. [ 75%]
........................................................................ [ 78%]
........................................................................ [ 82%]
........................................................................ [ 85%]
........................................................................ [ 88%]
........................................................................ [ 91%]
...............................s.ss.s.s...........s.s.................ss [ 94%]
sssssssssss.ssssss.sss.ssssssssssssssssssssssssssssssss.ss.............. [ 97%]
.............................................ssss [100%]
=================================== FAILURES ===================================
__________________ Test_AsyncJob.test_result_default_wo_state __________________

self =

def test_result_default_wo_state(self):
from google.cloud.bigquery.retry import DEFAULT_GET_JOB_TIMEOUT

begun_job_resource = _make_job_resource(
job_id=self.JOB_ID, project_id=self.PROJECT, location="US", started=True
)
done_job_resource = _make_job_resource(
job_id=self.JOB_ID,
project_id=self.PROJECT,
location="US",
started=True,
ended=True,
)
conn = make_connection(
_make_retriable_exception(),
begun_job_resource,
done_job_resource,
)
client = _make_client(project=self.PROJECT, connection=conn)
job = self._make_one(self.JOB_ID, client)

self.assertIs(job.result(retry=polling.DEFAULT_RETRY), job)

begin_call = mock.call(
method="POST",
path=f"/projects/{self.PROJECT}/jobs",
data={"jobReference": {"jobId": self.JOB_ID, "projectId": self.PROJECT}},
timeout=None,
)
reload_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}",
query_params={
"projection": "full",
"location": "US",
},
timeout=DEFAULT_GET_JOB_TIMEOUT,
)
> conn.api_request.assert_has_calls([begin_call, begin_call, reload_call])

tests/unit/job/test_base.py:1060:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 't...T', path='/projects/test-project-123/jobs/job-id', query_params={'projection': 'full', 'location': 'US'}, timeout=128)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 'test-project-123'}}, timeout=None),
E call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 'test-project-123'}}, timeout=None),
E call(method='GET', path='/projects/test-project-123/jobs/job-id', query_params={'projection': 'full', 'location': 'US'}, timeout=128)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 'test-project-123'}}, timeout=None),
E call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 'test-project-123'}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/test-project-123/jobs/job-id', query_params={'projection': 'full', 'location': 'US'}, timeout=128)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
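Every failure in this run has the same shape: the expected calls are all present in the Actual list, but interleaved with bookkeeping entries such as call.__code__.co_flags.__and__(128), and assert_has_calls(any_order=False) requires the expected calls to appear consecutively in mock_calls. The 128 in those entries is inspect.CO_COROUTINE, a code-object flag; it is unrelated to timeout=128, which is just the value of DEFAULT_GET_JOB_TIMEOUT. The entries appear when something probes whether a callable is a coroutine function by reading func.__code__.co_flags & 128; presumably (an assumption, since the probe's caller is not shown in this log) the api-core commit under test introduced such a check against the mocked api_request. On a mock, the attribute access and the operator call are themselves recorded. A minimal standalone sketch of the recording behavior:

from unittest import mock

CO_COROUTINE = 128  # inspect.CO_COROUTINE

api_request = mock.MagicMock()

# Stand-in for the duck-typed coroutine-function probe (assumed, per above):
bool(api_request.__code__.co_flags & CO_COROUTINE)

api_request(method="GET", path="/projects/p/jobs/j")

for c in api_request.mock_calls:
    print(c)
# call.__code__.co_flags.__and__(128)
# call.__code__.co_flags.__and__().__bool__()
# call(method='GET', path='/projects/p/jobs/j')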
__________________ Test_AsyncJob.test_result_w_retry_wo_state __________________

self =

def test_result_w_retry_wo_state(self):
from google.cloud.bigquery.retry import DEFAULT_GET_JOB_TIMEOUT

begun_job_resource = _make_job_resource(
job_id=self.JOB_ID, project_id=self.PROJECT, location="EU", started=True
)
done_job_resource = _make_job_resource(
job_id=self.JOB_ID,
project_id=self.PROJECT,
location="EU",
started=True,
ended=True,
)
conn = make_connection(
exceptions.NotFound("not normally retriable"),
begun_job_resource,
exceptions.NotFound("not normally retriable"),
done_job_resource,
)
client = _make_client(project=self.PROJECT, connection=conn)
job = self._make_one(
self._job_reference(self.JOB_ID, self.PROJECT, "EU"), client
)
custom_predicate = mock.Mock()
custom_predicate.return_value = True
custom_retry = google.api_core.retry.Retry(
predicate=custom_predicate,
initial=0.001,
maximum=0.001,
deadline=0.1,
)
self.assertIs(job.result(retry=custom_retry), job)

begin_call = mock.call(
method="POST",
path=f"/projects/{self.PROJECT}/jobs",
data={
"jobReference": {
"jobId": self.JOB_ID,
"projectId": self.PROJECT,
"location": "EU",
}
},
timeout=None,
)
reload_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}",
query_params={
"projection": "full",
"location": "EU",
},
timeout=DEFAULT_GET_JOB_TIMEOUT,
)
> conn.api_request.assert_has_calls(
[begin_call, begin_call, reload_call, reload_call]
)

tests/unit/job/test_base.py:1116:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 't...T', path='/projects/test-project-123/jobs/job-id', query_params={'projection': 'full', 'location': 'EU'}, timeout=128)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 'test-project-123', 'location': 'EU'}}, timeout=None),
E call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 'test-project-123', 'location': 'EU'}}, timeout=None),
E call(method='GET', path='/projects/test-project-123/jobs/job-id', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call(method='GET', path='/projects/test-project-123/jobs/job-id', query_params={'projection': 'full', 'location': 'EU'}, timeout=128)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 'test-project-123', 'location': 'EU'}}, timeout=None),
E call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 'test-project-123', 'location': 'EU'}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/test-project-123/jobs/job-id', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call(method='GET', path='/projects/test-project-123/jobs/job-id', query_params={'projection': 'full', 'location': 'EU'}, timeout=128)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
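The same interleaving breaks test_result_w_retry_wo_state. Not a fix taken from this log, but one way a test could be made tolerant of such introspection probes is to compare only the direct api_request(...) invocations, dropping child-attribute entries (which carry a dotted name) before asserting order:

from unittest import mock

def direct_calls(m):
    # mock_calls entries unpack as (name, args, kwargs). Direct invocations
    # of the mock itself have an empty name; bookkeeping entries like
    # call.__code__.co_flags.__and__(128) carry a dotted child name.
    out = []
    for name, args, kwargs in m.mock_calls:
        if not name:
            out.append(mock.call(*args, **kwargs))
    return out

# Usage, roughly: assert direct_calls(conn.api_request) == [begin_call, begin_call, reload_call, reload_call]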
_______________ TestQueryJob.test_result_begin_job_if_not_exist ________________

self =

def test_result_begin_job_if_not_exist(self):
begun_resource = self._make_resource()
query_running_resource = {
"jobComplete": True,
"jobReference": {
"projectId": self.PROJECT,
"jobId": self.JOB_ID,
"location": "US",
},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
"status": {"state": "RUNNING"},
}
query_done_resource = {
"jobComplete": True,
"jobReference": {
"projectId": self.PROJECT,
"jobId": self.JOB_ID,
"location": "US",
},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
"status": {"state": "DONE"},
}
done_resource = copy.deepcopy(begun_resource)
done_resource["status"] = {"state": "DONE"}
connection = make_connection(
begun_resource,
query_running_resource,
query_done_resource,
done_resource,
)
client = _make_client(project=self.PROJECT, connection=connection)
job = self._make_one(self.JOB_ID, self.QUERY, client)
job._properties["jobReference"]["location"] = "US"

job.result()

create_job_call = mock.call(
method="POST",
path=f"/projects/{self.PROJECT}/jobs",
data={
"jobReference": {
"jobId": self.JOB_ID,
"projectId": self.PROJECT,
"location": "US",
},
"configuration": {
"query": {"useLegacySql": False, "query": self.QUERY},
},
},
timeout=None,
)
reload_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}",
query_params={"projection": "full", "location": "US"},
timeout=DEFAULT_GET_JOB_TIMEOUT,
)
get_query_results_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/queries/{self.JOB_ID}",
query_params={
"maxResults": 0,
"location": "US",
},
timeout=None,
)

> connection.api_request.assert_has_calls(
[
# Make sure we start a job that hasn't started yet. See:
# https://github.com/googleapis/python-bigquery/issues/1940
create_job_call,
reload_call,
get_query_results_call,
reload_call,
]
)

tests/unit/job/test_query.py:1109:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/project/jobs', data={'jobReference': {'jobId': 'JOB_ID', 'projectId': 'project', ...ethod='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/project/jobs', data={'jobReference': {'jobId': 'JOB_ID', 'projectId': 'project', 'location': 'US'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select count(*) from persons'}}}, timeout=None),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'US'}, timeout=None),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/project/jobs', data={'jobReference': {'jobId': 'JOB_ID', 'projectId': 'project', 'location': 'US'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select count(*) from persons'}}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'US'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
____________ TestQueryJob.test_result_reloads_job_state_until_done _____________

self =

def test_result_reloads_job_state_until_done(self):
"""Verify that result() doesn't return until state == 'DONE'.

This test verifies correctness for a possible sequence of API responses
that might cause internal customer issue b/332850329.
"""
from google.cloud.bigquery.table import RowIterator

query_resource = {
"jobComplete": False,
"jobReference": {
"projectId": self.PROJECT,
"jobId": self.JOB_ID,
"location": "EU",
},
}
query_resource_done = {
"jobComplete": True,
"jobReference": {
"projectId": self.PROJECT,
"jobId": self.JOB_ID,
"location": "EU",
},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
"totalRows": "2",
"queryId": "abc-def",
}
job_resource = self._make_resource(started=True, location="EU")
job_resource_done = self._make_resource(started=True, ended=True, location="EU")
job_resource_done["configuration"]["query"]["destinationTable"] = {
"projectId": "dest-project",
"datasetId": "dest_dataset",
"tableId": "dest_table",
}
query_page_resource = {
# Explicitly set totalRows to be different from the initial
# response to test update during iteration.
"totalRows": "1",
"pageToken": None,
"rows": [{"f": [{"v": "abc"}]}],
}
conn = make_connection(
# QueryJob.result() makes a pair of jobs.get & jobs.getQueryResults
# REST API calls each iteration to determine if the job has finished
# or not.
#
# jobs.get (https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/get)
# is necessary to make sure the job has really finished via
# `Job.status.state == "DONE"` and to get necessary properties for
# `RowIterator` like the destination table.
#
# jobs.getQueryResults
# (https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/getQueryResults)
# with maxResults == 0 is technically optional,
# but it hangs up to 10 seconds until the job has finished. This
# makes sure we can know when the query has finished as close as
# possible to when the query finishes. It also gets properties
# necessary for `RowIterator` that aren't available on the job
# resource such as the schema
# (https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/getQueryResults#body.GetQueryResultsResponse.FIELDS.schema)
# of the results.
job_resource,
query_resource,
# The query wasn't finished in the last call to jobs.get, so try
# again with a call to both jobs.get & jobs.getQueryResults.
job_resource,
query_resource_done,
# Even though the previous jobs.getQueryResults response says
# the job is complete, we haven't downloaded the full job status
# yet.
#
# Important: per internal issue 332850329, this response has
# `Job.status.state = "RUNNING"`. This ensures we are protected
# against possible eventual consistency issues where
# `jobs.getQueryResults` says jobComplete == True, but our next
# call to `jobs.get` still doesn't have
# `Job.status.state == "DONE"`.
job_resource,
# Try again until `Job.status.state == "DONE"`.
#
# Note: the call to `jobs.getQueryResults` is missing here as
# an optimization. We already received a "completed" response, so
# we won't learn anything new by calling that API again.
job_resource,
job_resource_done,
# When we iterate over the `RowIterator` we return from
# `QueryJob.result()`, we make additional calls to
# `jobs.getQueryResults` but this time allowing the actual rows
# to be returned as well.
query_page_resource,
)
client = _make_client(self.PROJECT, connection=conn)
job = self._get_target_class().from_api_repr(job_resource, client)

result = job.result()

self.assertIsInstance(result, RowIterator)
self.assertEqual(result.total_rows, 2)
rows = list(result)
self.assertEqual(len(rows), 1)
self.assertEqual(rows[0].col1, "abc")
self.assertEqual(result.job_id, self.JOB_ID)
self.assertEqual(result.location, "EU")
self.assertEqual(result.project, self.PROJECT)
self.assertEqual(result.query_id, "abc-def")
# Test that the total_rows property has changed during iteration, based
# on the response from tabledata.list.
self.assertEqual(result.total_rows, 1)

query_results_path = f"/projects/{self.PROJECT}/queries/{self.JOB_ID}"
query_results_call = mock.call(
method="GET",
path=query_results_path,
query_params={"maxResults": 0, "location": "EU"},
timeout=None,
)
reload_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}",
query_params={"projection": "full", "location": "EU"},
timeout=DEFAULT_GET_JOB_TIMEOUT,
)
query_page_call = mock.call(
method="GET",
path=query_results_path,
query_params={
"fields": _LIST_ROWS_FROM_QUERY_RESULTS_FIELDS,
"location": "EU",
"formatOptions.useInt64Timestamp": True,
},
timeout=None,
)
# Ensure that we actually made the expected API calls in the sequence
# we described at the make_connection() call above.
#
# Note: The responses from jobs.get and jobs.getQueryResults can be
# deceptively similar, so this check ensures we actually made the
# requests we expected.
> conn.api_request.assert_has_calls(
[
# jobs.get & jobs.getQueryResults because the job just started.
reload_call,
query_results_call,
# jobs.get & jobs.getQueryResults because the query is still
# running.
reload_call,
query_results_call,
# We got a jobComplete response from the most recent call to
# jobs.getQueryResults, so now call jobs.get until we get
# `Jobs.status.state == "DONE"`. This tests a fix for internal
# issue b/332850329.
reload_call,
reload_call,
reload_call,
# jobs.getQueryResults without `maxResults` set to download
# the rows as we iterate over the `RowIterator`.
query_page_call,
]
)

tests/unit/job/test_query.py:999:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeo...='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128), ...]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'EU'}, timeout=None),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'EU'}, timeout=None),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'EU', 'formatOptions.useInt64Timestamp': True}, timeout=None)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'EU'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'EU'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(timeout=None, method='GET', path='/projects/project/queries/JOB_ID', query_params={'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'EU', 'formatOptions.useInt64Timestamp': True})]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
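The block comments inside make_connection() above spell out the polling protocol this test pins down. A compact sketch of that loop (illustrative only, with an api_request signature assumed to match the mocked calls; not the library's actual implementation) that yields the expected pre-download sequence reload, getQueryResults, reload, getQueryResults, reload, reload, reload:

def poll_until_done(api_request, project, job_id, location):
    # jobs.get confirms Job.status.state == "DONE"; jobs.getQueryResults with
    # maxResults=0 blocks up to ~10s server-side, so pair the two calls each
    # iteration until getQueryResults reports jobComplete, then rely on
    # jobs.get alone (another getQueryResults would add nothing).
    job_complete = False
    while True:
        job = api_request(
            method="GET",
            path=f"/projects/{project}/jobs/{job_id}",
            query_params={"projection": "full", "location": location},
        )
        if job["status"]["state"] == "DONE":
            return job
        if not job_complete:
            results = api_request(
                method="GET",
                path=f"/projects/{project}/queries/{job_id}",
                query_params={"maxResults": 0, "location": location},
            )
            job_complete = results.get("jobComplete", False)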
___________________ TestQueryJob.test_result_w_custom_retry ____________________

self =

def test_result_w_custom_retry(self):
from google.cloud.bigquery.table import RowIterator

query_resource = {
"jobComplete": False,
"jobReference": {"projectId": self.PROJECT, "jobId": self.JOB_ID},
}
query_resource_done = {
"jobComplete": True,
"jobReference": {"projectId": self.PROJECT, "jobId": self.JOB_ID},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
"totalRows": "2",
}
job_resource = self._make_resource(started=True, location="asia-northeast1")
job_resource_done = self._make_resource(
started=True, ended=True, location="asia-northeast1"
)
job_resource_done["configuration"]["query"]["destinationTable"] = {
"projectId": "dest-project",
"datasetId": "dest_dataset",
"tableId": "dest_table",
}

connection = make_connection(
# Also, for each API request, raise an exception that we know can
# be retried. Because of this, for each iteration we do:
# jobs.get (x2) & jobs.getQueryResults (x2)
exceptions.NotFound("not normally retriable"),
job_resource,
exceptions.NotFound("not normally retriable"),
query_resource,
# Query still not done, repeat both.
exceptions.NotFound("not normally retriable"),
job_resource,
exceptions.NotFound("not normally retriable"),
query_resource,
exceptions.NotFound("not normally retriable"),
# Query still not done, repeat both.
job_resource_done,
exceptions.NotFound("not normally retriable"),
query_resource_done,
# Query finished!
)
client = _make_client(self.PROJECT, connection=connection)
job = self._get_target_class().from_api_repr(job_resource, client)

custom_predicate = mock.Mock()
custom_predicate.return_value = True
custom_retry = google.api_core.retry.Retry(
initial=0.001,
maximum=0.001,
multiplier=1.0,
deadline=0.1,
predicate=custom_predicate,
)

self.assertIsInstance(job.result(retry=custom_retry), RowIterator)
query_results_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/queries/{self.JOB_ID}",
query_params={"maxResults": 0, "location": "asia-northeast1"},
# TODO(tswast): Why do we end up setting timeout to
# google.cloud.bigquery.client._MIN_GET_QUERY_RESULTS_TIMEOUT in
# some cases but not others?
timeout=mock.ANY,
)
reload_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}",
query_params={"projection": "full", "location": "asia-northeast1"},
timeout=DEFAULT_GET_JOB_TIMEOUT,
)

> connection.api_request.assert_has_calls(
[
# See make_connection() call above for explanation of the
# expected API calls.
#
# Query not done.
reload_call,
reload_call,
query_results_call,
query_results_call,
# Query still not done.
reload_call,
reload_call,
query_results_call,
query_results_call,
# Query done!
reload_call,
reload_call,
query_results_call,
query_results_call,
]
)

tests/unit/job/test_query.py:1400:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northe...'/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128), ...]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=<ANY>),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=<ANY>),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=<ANY>),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=<ANY>),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=<ANY>),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=<ANY>)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=None),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=None),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=None),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=None)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
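For reference on the timeout comparison above: query_results_call was built with timeout=mock.ANY (rendered <ANY>), and ANY compares equal to any value, so the timeout=None in the recorded getQueryResults calls matches it; the interleaved introspection probes are again the only reason the consecutive match fails. For instance:

from unittest import mock

# mock.ANY compares equal to any value, so these call objects are equal:
assert mock.call(timeout=None) == mock.call(timeout=mock.ANY)
assert mock.call(timeout=128) == mock.call(timeout=mock.ANY)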
_____________________ TestQueryJob.test_result_w_page_size _____________________

self =

def test_result_w_page_size(self):
# Arrange
query_results_resource = {
"jobComplete": True,
"jobReference": {"projectId": self.PROJECT, "jobId": self.JOB_ID},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
"totalRows": "10",
"rows": [
{"f": [{"v": "row1"}]},
{"f": [{"v": "row2"}]},
{"f": [{"v": "row3"}]},
{"f": [{"v": "row4"}]},
{"f": [{"v": "row5"}]},
{"f": [{"v": "row6"}]},
{"f": [{"v": "row7"}]},
{"f": [{"v": "row8"}]},
{"f": [{"v": "row9"}]},
],
"pageToken": "first-page-token",
}
job_resource_running = self._make_resource(
started=True, ended=False, location="US"
)
job_resource_done = self._make_resource(started=True, ended=True, location="US")
destination_table = {
"projectId": self.PROJECT,
"datasetId": self.DS_ID,
"tableId": self.TABLE_ID,
}
q_config = job_resource_done["configuration"]["query"]
q_config["destinationTable"] = destination_table
query_page_resource_2 = {"totalRows": 10, "rows": [{"f": [{"v": "row10"}]}]}
conn = make_connection(
job_resource_running,
query_results_resource,
job_resource_done,
query_page_resource_2,
)
client = _make_client(self.PROJECT, connection=conn)
job = self._get_target_class().from_api_repr(job_resource_running, client)

# Act
result = job.result(page_size=9)

# Assert
actual_rows = list(result)
self.assertEqual(len(actual_rows), 10)

jobs_get_path = f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}"
jobs_get_call = mock.call(
method="GET",
path=jobs_get_path,
query_params={"projection": "full", "location": "US"},
timeout=DEFAULT_GET_JOB_TIMEOUT,
)
query_results_path = f"/projects/{self.PROJECT}/queries/{self.JOB_ID}"
query_page_waiting_call = mock.call(
method="GET",
path=query_results_path,
query_params={
# Waiting for the results should set maxResults and cache the
# first page if page_size is set. This allows customers to
# more finely tune when we fall back to the BQ Storage API.
# See internal issue: 344008814.
"maxResults": 9,
"location": "US",
"formatOptions.useInt64Timestamp": True,
},
timeout=None,
)
query_page_2_call = mock.call(
timeout=None,
method="GET",
path=query_results_path,
query_params={
"pageToken": "first-page-token",
"maxResults": 9,
"fields": _LIST_ROWS_FROM_QUERY_RESULTS_FIELDS,
"location": "US",
"formatOptions.useInt64Timestamp": True,
},
)
> conn.api_request.assert_has_calls(
[jobs_get_call, query_page_waiting_call, jobs_get_call, query_page_2_call]
)

tests/unit/job/test_query.py:1625:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeo...ts': 9, 'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'US', 'formatOptions.useInt64Timestamp': True})]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 9, 'location': 'US', 'formatOptions.useInt64Timestamp': True}, timeout=None),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128),
E call(timeout=None, method='GET', path='/projects/project/queries/JOB_ID', query_params={'pageToken': 'first-page-token', 'maxResults': 9, 'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'US', 'formatOptions.useInt64Timestamp': True})]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 9, 'formatOptions.useInt64Timestamp': True, 'location': 'US'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(timeout=None, method='GET', path='/projects/project/queries/JOB_ID', query_params={'pageToken': 'first-page-token', 'maxResults': 9, 'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'US', 'formatOptions.useInt64Timestamp': True})]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
_______________ TestQueryJob.test_result_w_timeout_doesnt_raise ________________

self =

def test_result_w_timeout_doesnt_raise(self):
import google.cloud.bigquery.client

begun_resource = self._make_resource()
query_resource = {
"jobComplete": True,
"jobReference": {"projectId": self.PROJECT, "jobId": self.JOB_ID},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
}
done_resource = copy.deepcopy(begun_resource)
done_resource["status"] = {"state": "DONE"}
connection = make_connection(begun_resource, query_resource, done_resource)
client = _make_client(project=self.PROJECT, connection=connection)
job = self._make_one(self.JOB_ID, self.QUERY, client)
job._properties["jobReference"]["location"] = "US"
job._properties["status"] = {"state": "RUNNING"}

with freezegun.freeze_time("1970-01-01 00:00:00", tick=False):
job.result(
# Test that fractional seconds are supported, but use a timeout
# that is representable as a floating point without rounding
# errors since it can be represented exactly in base 2. In this
# case 1.125 is 9 / 8, which is a fraction with a power of 2 in
# the denominator.
timeout=1.125,
)

reload_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}",
query_params={"projection": "full", "location": "US"},
timeout=1.125,
)
get_query_results_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/queries/{self.JOB_ID}",
query_params={
"maxResults": 0,
"location": "US",
},
timeout=google.cloud.bigquery.client._MIN_GET_QUERY_RESULTS_TIMEOUT,
)
> connection.api_request.assert_has_calls(
[
reload_call,
get_query_results_call,
reload_call,
]
)

tests/unit/job/test_query.py:1489:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeo...hod='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=1.125)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=1.125),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'US'}, timeout=120),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=1.125)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=1.125),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'US'}, timeout=120),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=1.125)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
_____ TestQueryJob.test_result_w_timeout_raises_concurrent_futures_timeout _____

self =

def test_result_w_timeout_raises_concurrent_futures_timeout(self):
import google.cloud.bigquery.client

begun_resource = self._make_resource()
begun_resource["jobReference"]["location"] = "US"
query_resource = {
"jobComplete": True,
"jobReference": {"projectId": self.PROJECT, "jobId": self.JOB_ID},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
}
done_resource = copy.deepcopy(begun_resource)
done_resource["status"] = {"state": "DONE"}
connection = make_connection(begun_resource, query_resource, done_resource)
client = _make_client(project=self.PROJECT, connection=connection)
job = self._make_one(self.JOB_ID, self.QUERY, client)
job._properties["jobReference"]["location"] = "US"
job._properties["status"] = {"state": "RUNNING"}

with freezegun.freeze_time(
"1970-01-01 00:00:00", auto_tick_seconds=1.0
), self.assertRaises(concurrent.futures.TimeoutError):
job.result(timeout=1.125)

reload_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}",
query_params={"projection": "full", "location": "US"},
timeout=1.125,
)
get_query_results_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/queries/{self.JOB_ID}",
query_params={
"maxResults": 0,
"location": "US",
},
timeout=google.cloud.bigquery.client._MIN_GET_QUERY_RESULTS_TIMEOUT,
)
> connection.api_request.assert_has_calls(
[
reload_call,
get_query_results_call,
# Timeout before we can reload with the final job state.
]
)

tests/unit/job/test_query.py:1535:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeo...(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'US'}, timeout=120)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=1.125),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'US'}, timeout=120)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=1.125),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'US'}, timeout=120)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
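The freeze_time(..., auto_tick_seconds=1.0) context in this test is what makes the 1.125-second timeout deterministic: every read of the patched clock advances the frozen time by a full second, so the wait loop's budget is exhausted after its first couple of polls regardless of real wall time. A tiny sketch of that documented auto-tick behavior:

    import time
    import freezegun

    with freezegun.freeze_time("1970-01-01 00:00:00", auto_tick_seconds=1.0):
        t0 = time.time()
        t1 = time.time()
    # Each clock read advanced the frozen time by auto_tick_seconds.
    assert t1 - t0 == 1.0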
________ TestQueryJob.test_result_with_done_job_calls_get_query_results ________

self =

def test_result_with_done_job_calls_get_query_results(self):
query_resource_done = {
"jobComplete": True,
"jobReference": {"projectId": self.PROJECT, "jobId": self.JOB_ID},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
"totalRows": "1",
}
job_resource = self._make_resource(started=True, ended=True, location="EU")
job_resource["configuration"]["query"]["destinationTable"] = {
"projectId": "dest-project",
"datasetId": "dest_dataset",
"tableId": "dest_table",
}
results_page_resource = {
"totalRows": "1",
"pageToken": None,
"rows": [{"f": [{"v": "abc"}]}],
}
conn = make_connection(query_resource_done, results_page_resource)
client = _make_client(self.PROJECT, connection=conn)
job = self._get_target_class().from_api_repr(job_resource, client)

result = job.result()

rows = list(result)
self.assertEqual(len(rows), 1)
self.assertEqual(rows[0].col1, "abc")

query_results_path = f"/projects/{self.PROJECT}/queries/{self.JOB_ID}"
query_results_call = mock.call(
method="GET",
path=query_results_path,
query_params={"maxResults": 0, "location": "EU"},
timeout=None,
)
query_results_page_call = mock.call(
method="GET",
path=query_results_path,
query_params={
"fields": _LIST_ROWS_FROM_QUERY_RESULTS_FIELDS,
"location": "EU",
"formatOptions.useInt64Timestamp": True,
},
timeout=None,
)
> conn.api_request.assert_has_calls([query_results_call, query_results_page_call])

tests/unit/job/test_query.py:1165:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'EU'}, timeout...s': 'jobReference,totalRows,pageToken,rows', 'location': 'EU', 'formatOptions.useInt64Timestamp': True}, timeout=None)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'EU'}, timeout=None),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'EU', 'formatOptions.useInt64Timestamp': True}, timeout=None)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'EU'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(timeout=None, method='GET', path='/projects/project/queries/JOB_ID', query_params={'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'EU', 'formatOptions.useInt64Timestamp': True})]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
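One red herring worth ruling out above: the last "Actual" entry lists its keywords in a different order (call(timeout=None, method='GET', ...)) than the expected call. That is harmless, because mock.call equality compares keyword arguments as a dict; only the interleaved probe entries break the match:

    from unittest import mock

    # Keyword order never affects call equality.
    assert mock.call(timeout=None, method="GET") == mock.call(method="GET", timeout=None)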
__________________ TestQueryJob.test_result_with_max_results ___________________

self =

def test_result_with_max_results(self):
from google.cloud.bigquery.table import RowIterator

query_resource = {
"jobComplete": True,
"jobReference": {"projectId": self.PROJECT, "jobId": self.JOB_ID},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
"totalRows": "10",
"pageToken": "first-page-token",
"rows": [
{"f": [{"v": "abc"}]},
{"f": [{"v": "def"}]},
{"f": [{"v": "ghi"}]},
{"f": [{"v": "jkl"}]},
{"f": [{"v": "mno"}]},
{"f": [{"v": "pqr"}]},
# Pretend these are very large rows, so the API doesn't return
# all of the rows we asked for in the first response.
],
}
query_page_resource = {
"totalRows": "10",
"pageToken": None,
"rows": [
{"f": [{"v": "stu"}]},
{"f": [{"v": "vwx"}]},
{"f": [{"v": "yz0"}]},
],
}
job_resource_running = self._make_resource(
started=True, ended=False, location="US"
)
job_resource_done = self._make_resource(started=True, ended=True, location="US")
conn = make_connection(job_resource_done, query_resource, query_page_resource)
client = _make_client(self.PROJECT, connection=conn)
job = self._get_target_class().from_api_repr(job_resource_running, client)

max_results = 9
result = job.result(max_results=max_results)

self.assertIsInstance(result, RowIterator)
self.assertEqual(result.total_rows, 10)

rows = list(result)

self.assertEqual(len(rows), 9)
jobs_get_path = f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}"
jobs_get_call = mock.call(
method="GET",
path=jobs_get_path,
query_params={"projection": "full", "location": "US"},
timeout=DEFAULT_GET_JOB_TIMEOUT,
)
query_results_path = f"/projects/{self.PROJECT}/queries/{self.JOB_ID}"
query_page_waiting_call = mock.call(
method="GET",
path=query_results_path,
query_params={
# Waiting for the results should set maxResults and cache the
# first page if page_size is set. This allows customers to
# more finely tune when we fall back to the BQ Storage API.
# See internal issue: 344008814.
"maxResults": max_results,
"formatOptions.useInt64Timestamp": True,
"location": "US",
},
timeout=None,
)
query_page_2_call = mock.call(
timeout=None,
method="GET",
path=query_results_path,
query_params={
"pageToken": "first-page-token",
"maxResults": 3,
"fields": _LIST_ROWS_FROM_QUERY_RESULTS_FIELDS,
"location": "US",
"formatOptions.useInt64Timestamp": True,
},
)
# Waiting for the results should set maxResults and cache the
# first page if max_results is set. This allows customers to
# more finely tune when we fall back to the BQ Storage API.
# See internal issue: 344008814.
> conn.api_request.assert_has_calls(
[jobs_get_call, query_page_waiting_call, query_page_2_call]
)

tests/unit/job/test_query.py:1323:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeo...ts': 3, 'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'US', 'formatOptions.useInt64Timestamp': True})]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 9, 'formatOptions.useInt64Timestamp': True, 'location': 'US'}, timeout=None),
E call(timeout=None, method='GET', path='/projects/project/queries/JOB_ID', query_params={'pageToken': 'first-page-token', 'maxResults': 3, 'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'US', 'formatOptions.useInt64Timestamp': True})]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 9, 'formatOptions.useInt64Timestamp': True, 'location': 'US'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(timeout=None, method='GET', path='/projects/project/queries/JOB_ID', query_params={'pageToken': 'first-page-token', 'maxResults': 3, 'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'US', 'formatOptions.useInt64Timestamp': True})]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
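For reference, the page-size arithmetic in the expected calls above follows directly from the fixtures: with max_results=9 and six rows cached from the first (waiting) page, only three rows remain, hence maxResults=3 on the second page request. (The timeout=128 on the jobs.get call is DEFAULT_GET_JOB_TIMEOUT rendered as its value.)

    max_results = 9
    rows_in_first_page = 6                         # "abc" through "pqr" in query_resource
    assert max_results - rows_in_first_page == 3   # maxResults of query_page_2_call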
_____________ TestClient.test_create_routine_w_conflict_exists_ok ______________

self =

def test_create_routine_w_conflict_exists_ok(self):
from google.cloud.bigquery.routine import Routine

creds = _make_credentials()
client = self._make_one(project=self.PROJECT, credentials=creds)
resource = {
"routineReference": {
"projectId": "test-routine-project",
"datasetId": "test_routines",
"routineId": "minimal_routine",
}
}
path = "/projects/test-routine-project/datasets/test_routines/routines"

conn = client._connection = make_connection(
google.api_core.exceptions.AlreadyExists("routine already exists"), resource
)
full_routine_id = "test-routine-project.test_routines.minimal_routine"
routine = Routine(full_routine_id)
with mock.patch(
"google.cloud.bigquery.opentelemetry_tracing._get_final_span_attributes"
) as final_attributes:
actual_routine = client.create_routine(routine, exists_ok=True)

final_attributes.assert_called_with(
{"path": "%s/minimal_routine" % path}, client, None
)

self.assertEqual(actual_routine.project, "test-routine-project")
self.assertEqual(actual_routine.dataset_id, "test_routines")
self.assertEqual(actual_routine.routine_id, "minimal_routine")
> conn.api_request.assert_has_calls(
[
mock.call(
method="POST",
path=path,
data=resource,
timeout=DEFAULT_TIMEOUT,
),
mock.call(
method="GET",
path="/projects/test-routine-project/datasets/test_routines/routines/minimal_routine",
timeout=DEFAULT_TIMEOUT,
),
]
)

tests/unit/test_client.py:1058:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/test-routine-project/datasets/test_routines/routines', data={'routineReference': ...all(method='GET', path='/projects/test-routine-project/datasets/test_routines/routines/minimal_routine', timeout=None)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/test-routine-project/datasets/test_routines/routines', data={'routineReference': {'projectId': 'test-routine-project', 'datasetId': 'test_routines', 'routineId': 'minimal_routine'}}, timeout=None),
E call(method='GET', path='/projects/test-routine-project/datasets/test_routines/routines/minimal_routine', timeout=None)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/test-routine-project/datasets/test_routines/routines', data={'routineReference': {'projectId': 'test-routine-project', 'datasetId': 'test_routines', 'routineId': 'minimal_routine'}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/test-routine-project/datasets/test_routines/routines/minimal_routine', timeout=None)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
_________ TestClient.test_create_table_alreadyexists_w_exists_ok_true __________

self =

def test_create_table_alreadyexists_w_exists_ok_true(self):
post_path = "/projects/{}/datasets/{}/tables".format(self.PROJECT, self.DS_ID)
get_path = "/projects/{}/datasets/{}/tables/{}".format(
self.PROJECT, self.DS_ID, self.TABLE_ID
)
resource = self._make_table_resource()
creds = _make_credentials()
client = self._make_one(
project=self.PROJECT, credentials=creds, location=self.LOCATION
)
conn = client._connection = make_connection(
google.api_core.exceptions.AlreadyExists("table already exists"), resource
)

with mock.patch(
"google.cloud.bigquery.opentelemetry_tracing._get_final_span_attributes"
) as final_attributes:
got = client.create_table(
"{}.{}".format(self.DS_ID, self.TABLE_ID), exists_ok=True
)

final_attributes.assert_called_with({"path": get_path}, client, None)

self.assertEqual(got.project, self.PROJECT)
self.assertEqual(got.dataset_id, self.DS_ID)
self.assertEqual(got.table_id, self.TABLE_ID)

> conn.api_request.assert_has_calls(
[
mock.call(
method="POST",
path=post_path,
data={
"tableReference": {
"projectId": self.PROJECT,
"datasetId": self.DS_ID,
"tableId": self.TABLE_ID,
},
"labels": {},
},
timeout=DEFAULT_TIMEOUT,
),
mock.call(method="GET", path=get_path, timeout=DEFAULT_TIMEOUT),
]
)

tests/unit/test_client.py:1510:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/PROJECT/datasets/DATASET_ID/tables', data={'tableReference': {'projectId': 'PROJE...s': {}}, timeout=None), call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID/tables/TABLE_ID', timeout=None)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/PROJECT/datasets/DATASET_ID/tables', data={'tableReference': {'projectId': 'PROJECT', 'datasetId': 'DATASET_ID', 'tableId': 'TABLE_ID'}, 'labels': {}}, timeout=None),
E call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID/tables/TABLE_ID', timeout=None)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/PROJECT/datasets/DATASET_ID/tables', data={'tableReference': {'projectId': 'PROJECT', 'datasetId': 'DATASET_ID', 'tableId': 'TABLE_ID'}, 'labels': {}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID/tables/TABLE_ID', timeout=None)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
_____________ TestClient.test_list_rows_w_start_index_w_page_size ______________

self =

def test_list_rows_w_start_index_w_page_size(self):
from google.cloud.bigquery.schema import SchemaField
from google.cloud.bigquery.table import Table
from google.cloud.bigquery.table import Row

PATH = "projects/%s/datasets/%s/tables/%s/data" % (
self.PROJECT,
self.DS_ID,
self.TABLE_ID,
)

page_1 = {
"totalRows": 4,
"pageToken": "some-page-token",
"rows": [
{"f": [{"v": "Phred Phlyntstone"}]},
{"f": [{"v": "Bharney Rhubble"}]},
],
}
page_2 = {
"totalRows": 4,
"rows": [
{"f": [{"v": "Wylma Phlyntstone"}]},
{"f": [{"v": "Bhettye Rhubble"}]},
],
}
creds = _make_credentials()
http = object()
client = self._make_one(project=self.PROJECT, credentials=creds, _http=http)
conn = client._connection = make_connection(page_1, page_2)
full_name = SchemaField("full_name", "STRING", mode="REQUIRED")
table = Table(self.TABLE_REF, schema=[full_name])
iterator = client.list_rows(table, max_results=4, page_size=2, start_index=1)
pages = iterator.pages
rows = list(next(pages))
extra_params = iterator.extra_params
f2i = {"full_name": 0}
self.assertEqual(len(rows), 2)
self.assertEqual(rows[0], Row(("Phred Phlyntstone",), f2i))
self.assertEqual(rows[1], Row(("Bharney Rhubble",), f2i))

rows = list(next(pages))

self.assertEqual(len(rows), 2)
self.assertEqual(rows[0], Row(("Wylma Phlyntstone",), f2i))
self.assertEqual(rows[1], Row(("Bhettye Rhubble",), f2i))
self.assertEqual(
extra_params, {"startIndex": 1, "formatOptions.useInt64Timestamp": True}
)

> conn.api_request.assert_has_calls(
[
mock.call(
method="GET",
path="/%s" % PATH,
query_params={
"startIndex": 1,
"maxResults": 2,
"formatOptions.useInt64Timestamp": True,
},
timeout=DEFAULT_TIMEOUT,
),
mock.call(
method="GET",
path="/%s" % PATH,
query_params={
"pageToken": "some-page-token",
"maxResults": 2,
"formatOptions.useInt64Timestamp": True,
},
timeout=DEFAULT_TIMEOUT,
),
]
)

tests/unit/test_client.py:6858:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID/tables/TABLE_ID/data', query_params={'startIndex': 1, ...query_params={'pageToken': 'some-page-token', 'maxResults': 2, 'formatOptions.useInt64Timestamp': True}, timeout=None)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID/tables/TABLE_ID/data', query_params={'startIndex': 1, 'maxResults': 2, 'formatOptions.useInt64Timestamp': True}, timeout=None),
E call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID/tables/TABLE_ID/data', query_params={'pageToken': 'some-page-token', 'maxResults': 2, 'formatOptions.useInt64Timestamp': True}, timeout=None)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(timeout=None, method='GET', path='/projects/PROJECT/datasets/DATASET_ID/tables/TABLE_ID/data', query_params={'maxResults': 2, 'startIndex': 1, 'formatOptions.useInt64Timestamp': True}),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(timeout=None, method='GET', path='/projects/PROJECT/datasets/DATASET_ID/tables/TABLE_ID/data', query_params={'pageToken': 'some-page-token', 'maxResults': 2, 'formatOptions.useInt64Timestamp': True})]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
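The two expected calls above encode the tabledata pagination contract: startIndex appears only on the first request, and each response's pageToken drives the follow-up request. A hedged sketch of that loop (the helper name and shape are illustrative, not the client's internals):

    def fetch_all_rows(api_request, path, page_size, start_index):
        """Yield rows across pages, mirroring the two requests asserted above."""
        query_params = {
            "startIndex": start_index,
            "maxResults": page_size,
            "formatOptions.useInt64Timestamp": True,
        }
        while True:
            page = api_request(
                method="GET", path=path, query_params=query_params, timeout=None
            )
            yield from page.get("rows", [])
            token = page.get("pageToken")
            if token is None:
                break
            query_params = {
                "pageToken": token,
                "maxResults": page_size,
                "formatOptions.useInt64Timestamp": True,
            }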
_________ TestClient.test_query_and_wait_w_page_size_multiple_requests _________

self =

def test_query_and_wait_w_page_size_multiple_requests(self):
"""
For queries that last longer than the initial (about 10s) call to
jobs.query, we should still pass through the page size to the
subsequent calls to jobs.getQueryResults.

See internal issue 344008814.
"""
query = "select count(*) from `bigquery-public-data.usa_names.usa_1910_2013`"
job_reference = {
"projectId": "my-jobs-project",
"location": "my-jobs-location",
"jobId": "my-jobs-id",
}
jobs_query_response = {
"jobComplete": False,
"jobReference": job_reference,
}
jobs_get_response = {
"jobReference": job_reference,
"status": {"state": "DONE"},
}
get_query_results_response = {
"jobComplete": True,
}
creds = _make_credentials()
http = object()
client = self._make_one(project=self.PROJECT, credentials=creds, _http=http)
conn = client._connection = make_connection(
jobs_query_response,
jobs_get_response,
get_query_results_response,
)

_ = client.query_and_wait(query, page_size=11)

> conn.api_request.assert_has_calls(
[
# Verify the request we send is to jobs.query.
mock.call(
method="POST",
path=f"/projects/{self.PROJECT}/queries",
data={
"useLegacySql": False,
"query": query,
"formatOptions": {"useInt64Timestamp": True},
"maxResults": 11,
"requestId": mock.ANY,
},
timeout=None,
),
# jobs.get: Check if the job has finished.
mock.call(
method="GET",
path="/projects/my-jobs-project/jobs/my-jobs-id",
query_params={
"projection": "full",
"location": "my-jobs-location",
},
timeout=google.cloud.bigquery.retry.DEFAULT_GET_JOB_TIMEOUT,
),
# jobs.getQueryResults: wait for the query / fetch first page
mock.call(
method="GET",
path="/projects/my-jobs-project/queries/my-jobs-id",
query_params={
# We should still pass through the page size to the
# subsequent calls to jobs.getQueryResults.
#
# See internal issue 344008814.
"maxResults": 11,
"formatOptions.useInt64Timestamp": True,
"location": "my-jobs-location",
},
timeout=None,
),
]
)

tests/unit/test_client.py:5526:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/PROJECT/queries', data={'useLegacySql': False, 'query': 'select count(*) from `bi...uery_params={'maxResults': 11, 'formatOptions.useInt64Timestamp': True, 'location': 'my-jobs-location'}, timeout=None)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/PROJECT/queries', data={'useLegacySql': False, 'query': 'select count(*) from `bigquery-public-data.usa_names.usa_1910_2013`', 'formatOptions': {'useInt64Timestamp': True}, 'maxResults': 11, 'requestId': <ANY>}, timeout=None),
E call(method='GET', path='/projects/my-jobs-project/jobs/my-jobs-id', query_params={'projection': 'full', 'location': 'my-jobs-location'}, timeout=128),
E call(method='GET', path='/projects/my-jobs-project/queries/my-jobs-id', query_params={'maxResults': 11, 'formatOptions.useInt64Timestamp': True, 'location': 'my-jobs-location'}, timeout=None)]
E Actual: [call(method='POST', path='/projects/PROJECT/queries', data={'useLegacySql': False, 'formatOptions': {'useInt64Timestamp': True}, 'query': 'select count(*) from `bigquery-public-data.usa_names.usa_1910_2013`', 'maxResults': 11, 'requestId': '2e709470-030b-477a-87fc-40e98174d9b6'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/my-jobs-project/jobs/my-jobs-id', query_params={'projection': 'full', 'location': 'my-jobs-location'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/my-jobs-project/queries/my-jobs-id', query_params={'maxResults': 11, 'formatOptions.useInt64Timestamp': True, 'location': 'my-jobs-location'}, timeout=None)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
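The three expected calls above spell out the query_and_wait flow for long-running queries: jobs.query to start (carrying maxResults), jobs.get to poll for DONE, then jobs.getQueryResults to wait for and fetch the first page, with the page size passed through. A hedged sketch of that flow (helper name and control flow are illustrative, not the client's implementation):

    def query_and_wait_sketch(api_request, project, query, page_size):
        # 1. jobs.query: start the query, requesting page_size rows up front.
        response = api_request(
            method="POST",
            path=f"/projects/{project}/queries",
            data={"useLegacySql": False, "query": query, "maxResults": page_size},
            timeout=None,
        )
        if response.get("jobComplete"):
            return response
        ref = response["jobReference"]
        # 2. jobs.get: poll until the job reports state DONE.
        job = {"status": {"state": "RUNNING"}}
        while job["status"]["state"] != "DONE":
            job = api_request(
                method="GET",
                path=f"/projects/{ref['projectId']}/jobs/{ref['jobId']}",
                query_params={"projection": "full", "location": ref["location"]},
                timeout=None,
            )
        # 3. jobs.getQueryResults: wait for / fetch the first page, still
        #    passing the page size through as maxResults.
        return api_request(
            method="GET",
            path=f"/projects/{ref['projectId']}/queries/{ref['jobId']}",
            query_params={
                "maxResults": page_size,
                "formatOptions.useInt64Timestamp": True,
                "location": ref["location"],
            },
            timeout=None,
        )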
______________ test_create_dataset_alreadyexists_w_exists_ok_true ______________

PROJECT = 'PROJECT', DS_ID = 'DATASET_ID', LOCATION = 'us-central'

def test_create_dataset_alreadyexists_w_exists_ok_true(PROJECT, DS_ID, LOCATION):
post_path = "/projects/{}/datasets".format(PROJECT)
get_path = "/projects/{}/datasets/{}".format(PROJECT, DS_ID)
resource = {
"datasetReference": {"projectId": PROJECT, "datasetId": DS_ID},
"etag": "etag",
"id": "{}:{}".format(PROJECT, DS_ID),
"location": LOCATION,
}
client = make_client(location=LOCATION)
conn = client._connection = make_connection(
google.api_core.exceptions.AlreadyExists("dataset already exists"), resource
)
with mock.patch(
"google.cloud.bigquery.opentelemetry_tracing._get_final_span_attributes"
) as final_attributes:
dataset = client.create_dataset(DS_ID, exists_ok=True)

final_attributes.assert_called_with({"path": get_path}, client, None)

assert dataset.dataset_id == DS_ID
assert dataset.project == PROJECT
assert dataset.etag == resource["etag"]
assert dataset.full_dataset_id == resource["id"]
assert dataset.location == LOCATION

> conn.api_request.assert_has_calls(
[
mock.call(
method="POST",
path=post_path,
data={
"datasetReference": {"projectId": PROJECT, "datasetId": DS_ID},
"labels": {},
"location": LOCATION,
},
timeout=DEFAULT_TIMEOUT,
),
mock.call(method="GET", path=get_path, timeout=DEFAULT_TIMEOUT),
]
)

tests/unit/test_create_dataset.py:358:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/PROJECT/datasets', data={'datasetReference': {'projectId': 'PROJECT', 'datasetId'...ocation': 'us-central'}, timeout=None), call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID', timeout=None)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/PROJECT/datasets', data={'datasetReference': {'projectId': 'PROJECT', 'datasetId': 'DATASET_ID'}, 'labels': {}, 'location': 'us-central'}, timeout=None),
E call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID', timeout=None)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/PROJECT/datasets', data={'datasetReference': {'projectId': 'PROJECT', 'datasetId': 'DATASET_ID'}, 'labels': {}, 'location': 'us-central'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID', timeout=None)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
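All three exists_ok failures above exercise the same create-or-fetch pattern: try the POST, and if the backend answers 409 AlreadyExists, fall back to a GET of the existing resource, which is exactly the call pair being asserted. A minimal sketch of the pattern (hypothetical helper, not the client's implementation):

    import google.api_core.exceptions

    def create_dataset_exists_ok(api_request, post_path, get_path, body):
        try:
            return api_request(method="POST", path=post_path, data=body, timeout=None)
        except google.api_core.exceptions.AlreadyExists:
            # 409 Conflict: the resource exists, so fetch it instead.
            return api_request(method="GET", path=get_path, timeout=None)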
__ test_retry_connection_error_with_default_retries_and_successful_first_job ___

monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x7f93e7085b20>
client =

def test_retry_connection_error_with_default_retries_and_successful_first_job(
monkeypatch, client
):
"""
Make sure ConnectionError can be retried at `is_job_done` level, even if
retries are exhausted by the API-level retry.

Note: Because restart_query_job is set to True only in the case of a
confirmed job failure, this should be safe to do even when a job is not
idempotent.

Regression test for issue
https://github.com/googleapis/python-bigquery/issues/1929
"""
job_counter = 0

def make_job_id(*args, **kwargs):
nonlocal job_counter
job_counter += 1
return f"{job_counter}"

monkeypatch.setattr(_job_helpers, "make_job_id", make_job_id)
conn = client._connection = make_connection()
project = client.project
job_reference_1 = {"projectId": project, "jobId": "1", "location": "test-loc"}
NUM_API_RETRIES = 2

with freezegun.freeze_time(
"2024-01-01 00:00:00",
# Note: because of exponential backoff and a bit of jitter,
# NUM_API_RETRIES will get less accurate the greater the value.
# We add 1 because we know there will be at least some additional
# calls to fetch the time / sleep before the retry deadline is hit.
auto_tick_seconds=(
google.cloud.bigquery.retry._DEFAULT_RETRY_DEADLINE / NUM_API_RETRIES
)
+ 1,
):
conn.api_request.side_effect = [
# jobs.insert
{"jobReference": job_reference_1, "status": {"state": "PENDING"}},
# jobs.get
{"jobReference": job_reference_1, "status": {"state": "RUNNING"}},
# jobs.getQueryResults x2
requests.exceptions.ConnectionError(),
requests.exceptions.ConnectionError(),
# jobs.get
# Job actually succeeded, so we shouldn't be restarting the job,
# even though we are retrying at the `is_job_done` level.
{"jobReference": job_reference_1, "status": {"state": "DONE"}},
# jobs.getQueryResults
{"jobReference": job_reference_1, "jobComplete": True},
]

job = client.query("select 1")
rows_iter = job.result()

assert job.done() # Shouldn't make any additional API calls.
assert rows_iter is not None

# Should only have created one job, even though we did call job_retry.
assert job_counter == 1

# Double-check that we made the API calls we expected to make.
> conn.api_request.assert_has_calls(
[
# jobs.insert
mock.call(
method="POST",
path="/projects/PROJECT/jobs",
data={
"jobReference": {"jobId": "1", "projectId": "PROJECT"},
"configuration": {
"query": {"useLegacySql": False, "query": "select 1"}
},
},
timeout=None,
),
# jobs.get
mock.call(
method="GET",
path="/projects/PROJECT/jobs/1",
query_params={"location": "test-loc", "projection": "full"},
timeout=google.cloud.bigquery.retry.DEFAULT_GET_JOB_TIMEOUT,
),
# jobs.getQueryResults x2
mock.call(
method="GET",
path="/projects/PROJECT/queries/1",
query_params={"maxResults": 0, "location": "test-loc"},
timeout=None,
),
mock.call(
method="GET",
path="/projects/PROJECT/queries/1",
query_params={"maxResults": 0, "location": "test-loc"},
timeout=None,
),
# jobs.get -- is_job_done checking again
mock.call(
method="GET",
path="/projects/PROJECT/jobs/1",
query_params={"location": "test-loc", "projection": "full"},
timeout=google.cloud.bigquery.retry.DEFAULT_GET_JOB_TIMEOUT,
),
# jobs.getQueryResults
mock.call(
method="GET",
path="/projects/PROJECT/queries/1",
query_params={"maxResults": 0, "location": "test-loc"},
timeout=None,
),
],
)

tests/unit/test_job_retry.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/PROJECT/jobs', data={'jobReference': {'jobId': '1', 'projectId': 'PROJECT'}, 'con...ethod='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/PROJECT/jobs', data={'jobReference': {'jobId': '1', 'projectId': 'PROJECT'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select 1'}}}, timeout=None),
E call(method='GET', path='/projects/PROJECT/jobs/1', query_params={'location': 'test-loc', 'projection': 'full'}, timeout=128),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call(method='GET', path='/projects/PROJECT/jobs/1', query_params={'location': 'test-loc', 'projection': 'full'}, timeout=128),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/PROJECT/jobs', data={'jobReference': {'jobId': '1', 'projectId': 'PROJECT'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select 1'}}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/jobs/1', query_params={'projection': 'full', 'location': 'test-loc'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/jobs/1', query_params={'projection': 'full', 'location': 'test-loc'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
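The response scripting in this test relies on mock's side_effect accepting a list that mixes exceptions and return values, consumed one entry per call: the two ConnectionErrors are raised, the dict entries are returned. In miniature:

    from unittest import mock
    import requests

    api = mock.Mock(side_effect=[
        requests.exceptions.ConnectionError(),   # first call raises
        {"status": {"state": "DONE"}},           # second call returns a payload
    ])
    try:
        api()
    except requests.exceptions.ConnectionError:
        pass
    assert api() == {"status": {"state": "DONE"}}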
_ test_query_retry_with_default_retry_and_ambiguous_errors_only_retries_with_failed_job _

client =
monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x7f93e7a43f80>

def test_query_retry_with_default_retry_and_ambiguous_errors_only_retries_with_failed_job(
client, monkeypatch
):
"""
Some errors like 'rateLimitExceeded' can be ambiguous. Make sure we only
retry the job when we know for sure that the job has failed for a retriable
reason. We can only be sure after a "successful" call to jobs.get to fetch
the failed job status.
"""
job_counter = 0

def make_job_id(*args, **kwargs):
nonlocal job_counter
job_counter += 1
return f"{job_counter}"

monkeypatch.setattr(_job_helpers, "make_job_id", make_job_id)

project = client.project
job_reference_1 = {"projectId": project, "jobId": "1", "location": "test-loc"}
job_reference_2 = {"projectId": project, "jobId": "2", "location": "test-loc"}
NUM_API_RETRIES = 2

# This error is modeled after a real customer exception in
# https://github.com/googleapis/python-bigquery/issues/707.
internal_error = google.api_core.exceptions.InternalServerError(
"Job failed just because...",
errors=[
{"reason": "internalError"},
],
)
responses = [
# jobs.insert
{"jobReference": job_reference_1, "status": {"state": "PENDING"}},
# jobs.get
{"jobReference": job_reference_1, "status": {"state": "RUNNING"}},
# jobs.getQueryResults x2
#
# Note: internalError is ambiguous in jobs.getQueryResults. The
# problem could be at the Google Frontend level or it could be because
# the job has failed due to some transient issues and the BigQuery
# REST API is translating the failed job status into failure HTTP
# codes.
#
# TODO(GH#1903): We shouldn't retry nearly this many times when we get
# ambiguous errors from jobs.getQueryResults.
# See: https://github.com/googleapis/python-bigquery/issues/1903
internal_error,
internal_error,
# jobs.get -- the job has failed
{
"jobReference": job_reference_1,
"status": {"state": "DONE", "errorResult": {"reason": "internalError"}},
},
# jobs.insert
{"jobReference": job_reference_2, "status": {"state": "PENDING"}},
# jobs.get
{"jobReference": job_reference_2, "status": {"state": "RUNNING"}},
# jobs.getQueryResults
{"jobReference": job_reference_2, "jobComplete": True},
# jobs.get
{"jobReference": job_reference_2, "status": {"state": "DONE"}},
]

conn = client._connection = make_connection(*responses)

with freezegun.freeze_time(
# Note: because of exponential backoff and a bit of jitter,
# NUM_API_RETRIES will get less accurate the greater the value.
# We add 1 because we know there will be at least some additional
# calls to fetch the time / sleep before the retry deadline is hit.
auto_tick_seconds=(
google.cloud.bigquery.retry._DEFAULT_RETRY_DEADLINE / NUM_API_RETRIES
)
+ 1,
):
job = client.query("select 1")
job.result()

> conn.api_request.assert_has_calls(
[
# jobs.insert
mock.call(
method="POST",
path="/projects/PROJECT/jobs",
data={
"jobReference": {"jobId": "1", "projectId": "PROJECT"},
"configuration": {
"query": {"useLegacySql": False, "query": "select 1"}
},
},
timeout=None,
),
# jobs.get
mock.call(
method="GET",
path="/projects/PROJECT/jobs/1",
query_params={"location": "test-loc", "projection": "full"},
timeout=google.cloud.bigquery.retry.DEFAULT_GET_JOB_TIMEOUT,
),
# jobs.getQueryResults x2
mock.call(
method="GET",
path="/projects/PROJECT/queries/1",
query_params={"maxResults": 0, "location": "test-loc"},
timeout=None,
),
mock.call(
method="GET",
path="/projects/PROJECT/queries/1",
query_params={"maxResults": 0, "location": "test-loc"},
timeout=None,
),
# jobs.get -- verify that the job has failed
mock.call(
method="GET",
path="/projects/PROJECT/jobs/1",
query_params={"location": "test-loc", "projection": "full"},
timeout=google.cloud.bigquery.retry.DEFAULT_GET_JOB_TIMEOUT,
),
# jobs.insert
mock.call(
method="POST",
path="/projects/PROJECT/jobs",
data={
"jobReference": {
# Make sure that we generated a new job ID.
"jobId": "2",
"projectId": "PROJECT",
},
"configuration": {
"query": {"useLegacySql": False, "query": "select 1"}
},
},
timeout=None,
),
# jobs.get
mock.call(
method="GET",
path="/projects/PROJECT/jobs/2",
query_params={"location": "test-loc", "projection": "full"},
timeout=google.cloud.bigquery.retry.DEFAULT_GET_JOB_TIMEOUT,
),
# jobs.getQueryResults
mock.call(
method="GET",
path="/projects/PROJECT/queries/2",
query_params={"maxResults": 0, "location": "test-loc"},
timeout=None,
),
# jobs.get
mock.call(
method="GET",
path="/projects/PROJECT/jobs/2",
query_params={"location": "test-loc", "projection": "full"},
timeout=google.cloud.bigquery.retry.DEFAULT_GET_JOB_TIMEOUT,
),
]
)

tests/unit/test_job_retry.py:335:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/PROJECT/jobs', data={'jobReference': {'jobId': '1', 'projectId': 'PROJECT'}, 'con...'projectId': 'PROJECT'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select 1'}}}, timeout=None), ...]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/PROJECT/jobs', data={'jobReference': {'jobId': '1', 'projectId': 'PROJECT'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select 1'}}}, timeout=None),
E call(method='GET', path='/projects/PROJECT/jobs/1', query_params={'location': 'test-loc', 'projection': 'full'}, timeout=128),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call(method='GET', path='/projects/PROJECT/jobs/1', query_params={'location': 'test-loc', 'projection': 'full'}, timeout=128),
E call(method='POST', path='/projects/PROJECT/jobs', data={'jobReference': {'jobId': '2', 'projectId': 'PROJECT'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select 1'}}}, timeout=None),
E call(method='GET', path='/projects/PROJECT/jobs/2', query_params={'location': 'test-loc', 'projection': 'full'}, timeout=128),
E call(method='GET', path='/projects/PROJECT/queries/2', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call(method='GET', path='/projects/PROJECT/jobs/2', query_params={'location': 'test-loc', 'projection': 'full'}, timeout=128)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/PROJECT/jobs', data={'jobReference': {'jobId': '1', 'projectId': 'PROJECT'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select 1'}}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/jobs/1', query_params={'projection': 'full', 'location': 'test-loc'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/jobs/1', query_params={'projection': 'full', 'location': 'test-loc'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/PROJECT/jobs', data={'jobReference': {'jobId': '2', 'projectId': 'PROJECT'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select 1'}}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/jobs/2', query_params={'projection': 'full', 'location': 'test-loc'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/queries/2', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/jobs/2', query_params={'projection': 'full', 'location': 'test-loc'}, timeout=128)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
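The decision rule this test pins down, distilled as a hedged sketch (hypothetical helper; the real logic lives in the client's job-retry path): after an ambiguous getQueryResults error, restart the job only once jobs.get confirms a failed terminal state.

    def should_restart_job(job_resource):
        status = job_resource.get("status", {})
        # Restart only on a confirmed terminal failure.
        return status.get("state") == "DONE" and "errorResult" in status

    assert should_restart_job(
        {"status": {"state": "DONE", "errorResult": {"reason": "internalError"}}}
    )
    assert not should_restart_job({"status": {"state": "DONE"}})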
=============================== warnings summary ===============================
tests/unit/test_opentelemetry_tracing.py:45
/tmpfs/src/github/python-bigquery/tests/unit/test_opentelemetry_tracing.py:45: PytestRemovedIn9Warning: Marks applied to fixtures have no effect
See docs: https://docs.pytest.org/en/stable/deprecations.html#applying-a-mark-to-a-fixture-function
@pytest.mark.skipif(opentelemetry is None, reason="Require `opentelemetry`")

tests/unit/job/test_base.py::Test_AsyncJob::test__set_properties_w_creation_time
tests/unit/job/test_base.py::Test_AsyncJob::test__set_properties_w_end_time
tests/unit/job/test_base.py::Test_AsyncJob::test__set_properties_w_start_time
tests/unit/job/test_base.py::Test_AsyncJob::test_created
tests/unit/job/test_base.py::Test_AsyncJob::test_ended
tests/unit/job/test_base.py::Test_AsyncJob::test_started
/tmpfs/src/github/python-bigquery/tests/unit/job/test_base.py:334: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
now = datetime.datetime.utcnow().replace(

tests/unit/job/test_base.py: 5 warnings
tests/unit/job/test_copy.py: 6 warnings
tests/unit/job/test_extract.py: 6 warnings
tests/unit/job/test_load.py: 14 warnings
tests/unit/job/test_query.py: 27 warnings
tests/unit/test__job_helpers.py: 2 warnings
tests/unit/test_client.py: 123 warnings
tests/unit/test_create_dataset.py: 16 warnings
tests/unit/test_delete_dataset.py: 10 warnings
tests/unit/test_job_retry.py: 11 warnings
tests/unit/test_list_datasets.py: 4 warnings
tests/unit/test_list_jobs.py: 8 warnings
tests/unit/test_list_models.py: 9 warnings
tests/unit/test_list_projects.py: 4 warnings
tests/unit/test_list_routines.py: 9 warnings
tests/unit/test_list_tables.py: 13 warnings
/tmpfs/src/github/python-bigquery/.nox/unit_noextras-3-12/lib/python3.12/site-packages/google/api_core/retry/retry_unary.py:281: UserWarning: Using the synchronous google.api_core.retry.Retry with asynchronous calls may lead to unexpected results. Please use google.api_core.retry_async.AsyncRetry instead.
warnings.warn(_ASYNC_RETRY_WARNING)

tests/unit/job/test_copy.py: 11 warnings
tests/unit/job/test_extract.py: 9 warnings
tests/unit/job/test_load.py: 18 warnings
tests/unit/job/test_query.py: 34 warnings
/tmpfs/src/github/python-bigquery/tests/unit/job/helpers.py:109: DeprecationWarning: datetime.datetime.utcfromtimestamp() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.fromtimestamp(timestamp, datetime.UTC).
self.WHEN = datetime.datetime.utcfromtimestamp(self.WHEN_TS).replace(tzinfo=UTC)

tests/unit/test_client.py::TestClient::test_insert_rows_w_list_of_dictionaries
/tmpfs/src/github/python-bigquery/tests/unit/test_client.py:5700: DeprecationWarning: datetime.datetime.utcfromtimestamp() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.fromtimestamp(timestamp, datetime.UTC).
WHEN = datetime.datetime.utcfromtimestamp(WHEN_TS).replace(tzinfo=UTC)

tests/unit/test_client.py::TestClient::test_insert_rows_w_schema
/tmpfs/src/github/python-bigquery/tests/unit/test_client.py:5639: DeprecationWarning: datetime.datetime.utcfromtimestamp() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.fromtimestamp(timestamp, datetime.UTC).
WHEN = datetime.datetime.utcfromtimestamp(WHEN_TS).replace(tzinfo=UTC)

tests/unit/test_client.py::TestClient::test_list_rows
/tmpfs/src/github/python-bigquery/tests/unit/test_client.py:6752: DeprecationWarning: datetime.datetime.utcfromtimestamp() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.fromtimestamp(timestamp, datetime.UTC).
WHEN = datetime.datetime.utcfromtimestamp(WHEN_TS / 1e6).replace(tzinfo=UTC)

tests/unit/test_dataset.py::TestDataset::test_from_api_repr_bare
tests/unit/test_dataset.py::TestDataset::test_from_api_repr_missing_identity
tests/unit/test_dataset.py::TestDataset::test_from_api_repr_w_properties
/tmpfs/src/github/python-bigquery/tests/unit/test_dataset.py:668: DeprecationWarning: datetime.datetime.utcfromtimestamp() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.fromtimestamp(timestamp, datetime.UTC).
self.WHEN = datetime.datetime.utcfromtimestamp(self.WHEN_TS).replace(tzinfo=UTC)

tests/unit/test_query.py::Test_ScalarQueryParameter::test_to_api_repr_w_datetime_datetime
/tmpfs/src/github/python-bigquery/tests/unit/test_query.py:655: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
now = datetime.datetime.utcnow()

tests/unit/test_query.py::Test_ScalarQueryParameter::test_to_api_repr_w_datetime_string
/tmpfs/src/github/python-bigquery/tests/unit/test_query.py:669: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
now = datetime.datetime.utcnow()

tests/unit/test_query.py::Test_ScalarQueryParameter::test_to_api_repr_w_timestamp_micros
/tmpfs/src/github/python-bigquery/tests/unit/test_query.py:642: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
now = datetime.datetime.utcnow()

tests/unit/test_query.py::Test_RangeQueryParameter::test_to_api_repr_w_datetime_datetime
/tmpfs/src/github/python-bigquery/tests/unit/test_query.py:1052: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
now = datetime.datetime.utcnow()

tests/unit/test_query.py::Test_RangeQueryParameter::test_to_api_repr_w_timestamp_timestamp
/tmpfs/src/github/python-bigquery/tests/unit/test_query.py:1092: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
now = datetime.datetime.utcnow()

tests/unit/test_table.py::TestTable::test_from_api_repr_bare
tests/unit/test_table.py::TestTable::test_from_api_repr_missing_identity
tests/unit/test_table.py::TestTable::test_from_api_repr_w_partial_streamingbuffer
tests/unit/test_table.py::TestTable::test_from_api_repr_w_properties
tests/unit/test_table.py::TestTable::test_from_api_with_encryption
/tmpfs/src/github/python-bigquery/tests/unit/test_table.py:395: DeprecationWarning: datetime.datetime.utcfromtimestamp() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.fromtimestamp(timestamp, datetime.UTC).
self.WHEN = datetime.datetime.utcfromtimestamp(self.WHEN_TS).replace(tzinfo=UTC)

tests/unit/test_table.py::TestTable::test_to_api_repr_w_unsetting_expiration
/tmpfs/src/github/python-bigquery/tests/unit/test_table.py:1170: PendingDeprecationWarning: This method will be deprecated in future versions. Please use Table.time_partitioning.expiration_ms instead.
table.partition_expiration = None

tests/unit/test_table.py::TestTableListItem::test_ctor
/tmpfs/src/github/python-bigquery/tests/unit/test_table.py:1553: DeprecationWarning: datetime.datetime.utcfromtimestamp() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.fromtimestamp(timestamp, datetime.UTC).
self.WHEN = datetime.datetime.utcfromtimestamp(self.WHEN_TS).replace(tzinfo=UTC)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
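Most of the DeprecationWarnings above are the same Python 3.12 datetime deprecation; the replacements the warnings themselves suggest are:

    import datetime

    # Instead of datetime.datetime.utcnow():
    now = datetime.datetime.now(datetime.UTC)
    # Instead of datetime.datetime.utcfromtimestamp(ts).replace(tzinfo=UTC):
    ts = 1_437_767_599.125  # example timestamp
    when = datetime.datetime.fromtimestamp(ts, datetime.UTC)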
=========================== short test summary info ============================
FAILED tests/unit/job/test_base.py::Test_AsyncJob::test_result_default_wo_state
FAILED tests/unit/job/test_base.py::Test_AsyncJob::test_result_w_retry_wo_state
FAILED tests/unit/job/test_query.py::TestQueryJob::test_result_begin_job_if_not_exist
FAILED tests/unit/job/test_query.py::TestQueryJob::test_result_reloads_job_state_until_done
FAILED tests/unit/job/test_query.py::TestQueryJob::test_result_w_custom_retry
FAILED tests/unit/job/test_query.py::TestQueryJob::test_result_w_page_size - ...
FAILED tests/unit/job/test_query.py::TestQueryJob::test_result_w_timeout_doesnt_raise
FAILED tests/unit/job/test_query.py::TestQueryJob::test_result_w_timeout_raises_concurrent_futures_timeout
FAILED tests/unit/job/test_query.py::TestQueryJob::test_result_with_done_job_calls_get_query_results
FAILED tests/unit/job/test_query.py::TestQueryJob::test_result_with_max_results
FAILED tests/unit/test_client.py::TestClient::test_create_routine_w_conflict_exists_ok
FAILED tests/unit/test_client.py::TestClient::test_create_table_alreadyexists_w_exists_ok_true
FAILED tests/unit/test_client.py::TestClient::test_list_rows_w_start_index_w_page_size
FAILED tests/unit/test_client.py::TestClient::test_query_and_wait_w_page_size_multiple_requests
FAILED tests/unit/test_create_dataset.py::test_create_dataset_alreadyexists_w_exists_ok_true
FAILED tests/unit/test_job_retry.py::test_retry_connection_error_with_default_retries_and_successful_first_job
FAILED tests/unit/test_job_retry.py::test_query_retry_with_default_retry_and_ambiguous_errors_only_retries_with_failed_job
17 failed, 1989 passed, 283 skipped, 364 warnings in 75.09s (0:01:15)
nox > Command py.test --quiet '-W default::PendingDeprecationWarning' --cov=google/cloud/bigquery --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit failed with exit code 1
nox > Session unit_noextras-3.12 failed.
nox > Running session unit-3.7
nox > Creating virtual environment (virtualenv) using python3.7 in .nox/unit-3-7
nox > python -m pip install pytest google-cloud-testutils pytest-cov freezegun -c /tmpfs/src/github/python-bigquery/testing/constraints-3.7.txt
nox > python -m pip install -e '.[all]' -c /tmpfs/src/github/python-bigquery/testing/constraints-3.7.txt
nox > Command python -m pip install -e '.[all]' -c /tmpfs/src/github/python-bigquery/testing/constraints-3.7.txt failed with exit code 1:
Obtaining file:///tmpfs/src/github/python-bigquery
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting google-api-core[grpc]@ git+https://github.com/googleapis/python-api-core.git@main (from google-cloud-bigquery==3.26.0)
Cloning https://github.com/googleapis/python-api-core.git (to revision main) to /tmp/pip-install-yl28ku8r/google-api-core_b47c40d2157148c1a7f7a4caf19c37bf
Running command git clone --filter=blob:none --quiet https://github.com/googleapis/python-api-core.git /tmp/pip-install-yl28ku8r/google-api-core_b47c40d2157148c1a7f7a4caf19c37bf
Resolved https://github.com/googleapis/python-api-core.git to commit 0d5ed37c96f9b40bccae98e228163a88abeb1763
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
INFO: pip is looking at multiple versions of google-cloud-bigquery[all] to determine which version is compatible with other requirements. This could take a while.
ERROR: Could not find a version that satisfies the requirement google-api-core[grpc] 2.21.0 (from google-cloud-bigquery[all]) (from versions: 0.1.0, 0.1.1, 0.1.2, 0.1.3, 0.1.4, 1.0.0, 1.1.0, 1.1.1, 1.1.2, 1.2.0, 1.2.1, 1.3.0, 1.4.0, 1.4.1, 1.5.0, 1.5.1, 1.5.2, 1.6.0a1, 1.6.0, 1.7.0, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.10.0, 1.11.0, 1.11.1, 1.12.0, 1.13.0, 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.15.0, 1.16.0, 1.17.0, 1.18.0, 1.19.0, 1.19.1, 1.20.0, 1.20.1, 1.21.0, 1.22.0, 1.22.1, 1.22.2, 1.22.3, 1.22.4, 1.23.0, 1.24.0, 1.24.1, 1.25.0, 1.25.1, 1.26.0.dev0, 1.26.0, 1.26.1, 1.26.2, 1.26.3, 1.27.0, 1.28.0, 1.29.0, 1.30.0, 1.31.0, 1.31.1, 1.31.2, 1.31.3, 1.31.4, 1.31.5, 1.31.6, 1.32.0, 1.33.0b1, 1.33.0, 1.33.1, 1.33.2, 1.34.0, 1.34.1rc1, 1.34.1, 2.0.0b1, 2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.1, 2.3.2, 2.4.0, 2.5.0, 2.6.0, 2.6.1, 2.7.0, 2.7.1, 2.7.2, 2.7.3, 2.8.0, 2.8.1, 2.8.2, 2.9.0, 2.10.0, 2.10.1, 2.10.2, 2.11.0rc1, 2.11.0, 2.11.1rc1, 2.11.1, 2.12.0.dev0, 2.12.0.dev1, 2.12.0rc1, 2.12.0, 2.13.0rc1, 2.13.0, 2.13.1, 2.14.0, 2.15.0rc1, 2.15.0, 2.16.0rc0, 2.16.0, 2.16.1, 2.16.2, 2.17.0rc0, 2.17.0, 2.17.1rc1, 2.17.1, 2.18.0rc0, 2.18.0, 2.19.0rc0, 2.19.0, 2.19.1rc0, 2.19.1, 2.19.2, 2.20.0rc0, 2.20.0, 2.20.1rc0, 2.21.0rc0, 2.21.0)
ERROR: No matching distribution found for google-api-core[grpc] 2.21.0

[notice] A new release of pip is available: 23.1.2 -> 24.0
[notice] To update, run: pip install --upgrade pip
nox > Session unit-3.7 failed.
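
Note: the unit-3.7 session fails during dependency resolution, before any tests run. The package under test requires google-api-core from the git main branch (which identifies itself as 2.21.0, per the "Collecting" line above), and pip cannot reconcile that requirement with testing/constraints-3.7.txt. A plausible reading, with a hypothetical pin, is a conflict of this shape:

# Required by google-cloud-bigquery==3.26.0 (from the log above):
#   google-api-core[grpc] @ git+https://github.com/googleapis/python-api-core.git@main  (== 2.21.0)
# Pinned by testing/constraints-3.7.txt (hypothetical example):
#   google-api-core==1.31.5
# No single candidate satisfies both, so the install aborts with exit code 1.
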
nox > Running session unit-3.8
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/unit-3-8
nox > python -m pip install pytest google-cloud-testutils pytest-cov freezegun -c /tmpfs/src/github/python-bigquery/testing/constraints-3.8.txt
nox > python -m pip install -e '.[all]' -c /tmpfs/src/github/python-bigquery/testing/constraints-3.8.txt
nox > python -m pip freeze
asttokens==2.4.1
attrs==24.2.0
backcall==0.2.0
bigquery-magics==0.4.0
cachetools==5.5.0
certifi==2024.8.30
charset-normalizer==3.4.0
click==8.1.7
click-plugins==1.1.1
cligj==0.7.2
comm==0.2.2
coverage==7.6.1
db-dtypes==1.3.0
debugpy==1.8.7
decorator==5.1.1
Deprecated==1.2.14
exceptiongroup==1.2.2
executing==2.1.0
fiona==1.10.1
freezegun==1.5.1
geopandas==0.13.2
google-api-core @ git+https://github.com/googleapis/python-api-core.git@0d5ed37c96f9b40bccae98e228163a88abeb1763
google-auth==2.35.0
google-auth-oauthlib==1.2.1
# Editable Git install with no remote (google-cloud-bigquery==3.26.0)
-e /tmpfs/src/github/python-bigquery
google-cloud-bigquery-storage==2.27.0
google-cloud-core==2.4.1
google-cloud-testutils==1.4.0
google-crc32c==1.5.0
google-resumable-media==2.7.2
googleapis-common-protos==1.65.0
grpcio==1.47.0
grpcio-status==1.47.0
idna==3.10
importlib_metadata==8.4.0
iniconfig==2.0.0
ipykernel==6.29.5
ipython==8.12.3
ipywidgets==8.1.5
jedi==0.19.1
jupyter_client==8.6.3
jupyter_core==5.7.2
jupyterlab_widgets==3.0.13
matplotlib-inline==0.1.7
nest-asyncio==1.6.0
numpy==1.24.4
oauthlib==3.2.2
opentelemetry-api==1.27.0
opentelemetry-instrumentation==0.48b0
opentelemetry-sdk==1.27.0
opentelemetry-semantic-conventions==0.48b0
packaging==24.1
pandas==1.2.0
parso==0.8.4
pexpect==4.9.0
pickleshare==0.7.5
platformdirs==4.3.6
pluggy==1.5.0
prompt_toolkit==3.0.48
proto-plus==1.24.0
protobuf==5.28.2
psutil==6.1.0
ptyprocess==0.7.0
pure_eval==0.2.3
pyarrow==17.0.0
pyasn1==0.6.1
pyasn1_modules==0.4.1
pydata-google-auth==1.8.2
Pygments==2.18.0
pyproj==3.5.0
pytest==8.3.3
pytest-cov==5.0.0
python-dateutil==2.9.0.post0
pytz==2024.2
pyzmq==26.2.0
requests==2.32.3
requests-oauthlib==2.0.0
rsa==4.9
shapely==2.0.6
six==1.16.0
stack-data==0.6.3
tomli==2.0.2
tornado==6.4.1
tqdm==4.66.5
traitlets==5.14.3
typing_extensions==4.12.2
urllib3==2.2.3
wcwidth==0.2.13
widgetsnbextension==4.0.13
wrapt==1.16.0
zipp==3.20.2
nox > py.test --quiet '-W default::PendingDeprecationWarning' --cov=google/cloud/bigquery --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit
........................................................................ [ 2%]
..................................F.F................................... [ 5%]
........................................................................ [ 8%]
........................................................................ [ 11%]
........................................................F..F.F.FFFF.F... [ 14%]
........................................................................ [ 17%]
........................................................................ [ 20%]
........................................................................ [ 23%]
........................................................................ [ 26%]
........................................................................ [ 29%]
........................................................................ [ 32%]
........................................................................ [ 35%]
........................................................................ [ 38%]
................................s.....s..........s...................... [ 41%]
.....................F..F............................................... [ 44%]
.......................................................F..............F. [ 47%]
........................................................................ [ 50%]
................................................F....................... [ 53%]
........................................................................ [ 56%]
........................................................................ [ 59%]
........................................................................ [ 62%]
........................................................................ [ 65%]
...............................................................FF....... [ 68%]
...................................................F.F.................. [ 71%]
........................................................................ [ 74%]
........................................................................ [ 77%]
........................................................................ [ 80%]
........................................................................ [ 83%]
........................................................................ [ 86%]
........................................................................ [ 89%]
........................................................................ [ 92%]
........................................................................ [ 95%]
........................................................................ [ 98%]
................................. [100%]
=================================== FAILURES ===================================
__________________ Test_AsyncJob.test_result_default_wo_state __________________

self =

def test_result_default_wo_state(self):
from google.cloud.bigquery.retry import DEFAULT_GET_JOB_TIMEOUT

begun_job_resource = _make_job_resource(
job_id=self.JOB_ID, project_id=self.PROJECT, location="US", started=True
)
done_job_resource = _make_job_resource(
job_id=self.JOB_ID,
project_id=self.PROJECT,
location="US",
started=True,
ended=True,
)
conn = make_connection(
_make_retriable_exception(),
begun_job_resource,
done_job_resource,
)
client = _make_client(project=self.PROJECT, connection=conn)
job = self._make_one(self.JOB_ID, client)

self.assertIs(job.result(retry=polling.DEFAULT_RETRY), job)

begin_call = mock.call(
method="POST",
path=f"/projects/{self.PROJECT}/jobs",
data={"jobReference": {"jobId": self.JOB_ID, "projectId": self.PROJECT}},
timeout=None,
)
reload_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}",
query_params={
"projection": "full",
"location": "US",
},
timeout=DEFAULT_GET_JOB_TIMEOUT,
)
> conn.api_request.assert_has_calls([begin_call, begin_call, reload_call])

tests/unit/job/test_base.py:1060:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 't...T', path='/projects/test-project-123/jobs/job-id', query_params={'projection': 'full', 'location': 'US'}, timeout=128)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 'test-project-123'}}, timeout=None),
E call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 'test-project-123'}}, timeout=None),
E call(method='GET', path='/projects/test-project-123/jobs/job-id', query_params={'projection': 'full', 'location': 'US'}, timeout=128)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 'test-project-123'}}, timeout=None),
E call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 'test-project-123'}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/test-project-123/jobs/job-id', query_params={'projection': 'full', 'location': 'US'}, timeout=128)]

/usr/local/lib/python3.8/unittest/mock.py:950: AssertionError
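
Note: all seventeen assert_has_calls failures in this run share one signature: extra call.__code__.co_flags.__and__(128) / .__bool__() entries interleaved with the real api_request calls. 128 is inspect.CO_COROUTINE, so something in the call path, presumably the pre-release google-api-core revision installed from git, probes the mocked api_request for coroutine-ness by reading __code__.co_flags. On a spec'd mock that attribute resolves to a child mock, the probe itself is recorded, and assert_has_calls with any_order=False, which requires a contiguous match, no longer finds the expected sequence. A minimal sketch reproducing the effect under the Python 3.8 unittest.mock used here, with a hypothetical api_request stand-in:

from unittest import mock

def api_request(method, path, **kwargs):
    """Hypothetical stand-in for the autospecced Connection.api_request."""

m = mock.MagicMock(spec=api_request)

for _ in range(2):
    # A coroutine-detection probe: on a spec'd mock, __code__ resolves to a
    # child mock, so the __and__/__bool__ calls land in m.mock_calls too.
    bool(m.__code__.co_flags & 128)  # 128 == inspect.CO_COROUTINE
    m(method="GET", path="/projects/p/jobs/j")

real = mock.call(method="GET", path="/projects/p/jobs/j")
try:
    # Fails: the probe calls sit between the two real calls, so the expected
    # pair is not a contiguous sublist of m.mock_calls.
    m.assert_has_calls([real, real])
except AssertionError:
    print("interleaved __code__ probe calls break the ordered match")
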
__________________ Test_AsyncJob.test_result_w_retry_wo_state __________________

self =

def test_result_w_retry_wo_state(self):
from google.cloud.bigquery.retry import DEFAULT_GET_JOB_TIMEOUT

begun_job_resource = _make_job_resource(
job_id=self.JOB_ID, project_id=self.PROJECT, location="EU", started=True
)
done_job_resource = _make_job_resource(
job_id=self.JOB_ID,
project_id=self.PROJECT,
location="EU",
started=True,
ended=True,
)
conn = make_connection(
exceptions.NotFound("not normally retriable"),
begun_job_resource,
exceptions.NotFound("not normally retriable"),
done_job_resource,
)
client = _make_client(project=self.PROJECT, connection=conn)
job = self._make_one(
self._job_reference(self.JOB_ID, self.PROJECT, "EU"), client
)
custom_predicate = mock.Mock()
custom_predicate.return_value = True
custom_retry = google.api_core.retry.Retry(
predicate=custom_predicate,
initial=0.001,
maximum=0.001,
deadline=0.1,
)
self.assertIs(job.result(retry=custom_retry), job)

begin_call = mock.call(
method="POST",
path=f"/projects/{self.PROJECT}/jobs",
data={
"jobReference": {
"jobId": self.JOB_ID,
"projectId": self.PROJECT,
"location": "EU",
}
},
timeout=None,
)
reload_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}",
query_params={
"projection": "full",
"location": "EU",
},
timeout=DEFAULT_GET_JOB_TIMEOUT,
)
> conn.api_request.assert_has_calls(
[begin_call, begin_call, reload_call, reload_call]
)

tests/unit/job/test_base.py:1116:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 't...T', path='/projects/test-project-123/jobs/job-id', query_params={'projection': 'full', 'location': 'EU'}, timeout=128)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 'test-project-123', 'location': 'EU'}}, timeout=None),
E call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 'test-project-123', 'location': 'EU'}}, timeout=None),
E call(method='GET', path='/projects/test-project-123/jobs/job-id', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call(method='GET', path='/projects/test-project-123/jobs/job-id', query_params={'projection': 'full', 'location': 'EU'}, timeout=128)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 'test-project-123', 'location': 'EU'}}, timeout=None),
E call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 'test-project-123', 'location': 'EU'}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/test-project-123/jobs/job-id', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call(method='GET', path='/projects/test-project-123/jobs/job-id', query_params={'projection': 'full', 'location': 'EU'}, timeout=128)]

/usr/local/lib/python3.8/unittest/mock.py:950: AssertionError
_______________ TestQueryJob.test_result_begin_job_if_not_exist ________________

self =

def test_result_begin_job_if_not_exist(self):
begun_resource = self._make_resource()
query_running_resource = {
"jobComplete": True,
"jobReference": {
"projectId": self.PROJECT,
"jobId": self.JOB_ID,
"location": "US",
},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
"status": {"state": "RUNNING"},
}
query_done_resource = {
"jobComplete": True,
"jobReference": {
"projectId": self.PROJECT,
"jobId": self.JOB_ID,
"location": "US",
},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
"status": {"state": "DONE"},
}
done_resource = copy.deepcopy(begun_resource)
done_resource["status"] = {"state": "DONE"}
connection = make_connection(
begun_resource,
query_running_resource,
query_done_resource,
done_resource,
)
client = _make_client(project=self.PROJECT, connection=connection)
job = self._make_one(self.JOB_ID, self.QUERY, client)
job._properties["jobReference"]["location"] = "US"

job.result()

create_job_call = mock.call(
method="POST",
path=f"/projects/{self.PROJECT}/jobs",
data={
"jobReference": {
"jobId": self.JOB_ID,
"projectId": self.PROJECT,
"location": "US",
},
"configuration": {
"query": {"useLegacySql": False, "query": self.QUERY},
},
},
timeout=None,
)
reload_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}",
query_params={"projection": "full", "location": "US"},
timeout=DEFAULT_GET_JOB_TIMEOUT,
)
get_query_results_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/queries/{self.JOB_ID}",
query_params={
"maxResults": 0,
"location": "US",
},
timeout=None,
)

> connection.api_request.assert_has_calls(
[
# Make sure we start a job that hasn't started yet. See:
# https://github.com/googleapis/python-bigquery/issues/1940
create_job_call,
reload_call,
get_query_results_call,
reload_call,
]
)

tests/unit/job/test_query.py:1109:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/project/jobs', data={'jobReference': {'jobId': 'JOB_ID', 'projectId': 'project', ...ethod='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/project/jobs', data={'jobReference': {'jobId': 'JOB_ID', 'projectId': 'project', 'location': 'US'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select count(*) from persons'}}}, timeout=None),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'US'}, timeout=None),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/project/jobs', data={'jobReference': {'jobId': 'JOB_ID', 'projectId': 'project', 'location': 'US'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select count(*) from persons'}}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'US'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128)]

/usr/local/lib/python3.8/unittest/mock.py:950: AssertionError
____________ TestQueryJob.test_result_reloads_job_state_until_done _____________

self =

def test_result_reloads_job_state_until_done(self):
"""Verify that result() doesn't return until state == 'DONE'.

This test verifies correctness for a possible sequence of API responses
that might cause internal customer issue b/332850329.
"""
from google.cloud.bigquery.table import RowIterator

query_resource = {
"jobComplete": False,
"jobReference": {
"projectId": self.PROJECT,
"jobId": self.JOB_ID,
"location": "EU",
},
}
query_resource_done = {
"jobComplete": True,
"jobReference": {
"projectId": self.PROJECT,
"jobId": self.JOB_ID,
"location": "EU",
},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
"totalRows": "2",
"queryId": "abc-def",
}
job_resource = self._make_resource(started=True, location="EU")
job_resource_done = self._make_resource(started=True, ended=True, location="EU")
job_resource_done["configuration"]["query"]["destinationTable"] = {
"projectId": "dest-project",
"datasetId": "dest_dataset",
"tableId": "dest_table",
}
query_page_resource = {
# Explicitly set totalRows to be different from the initial
# response to test update during iteration.
"totalRows": "1",
"pageToken": None,
"rows": [{"f": [{"v": "abc"}]}],
}
conn = make_connection(
# QueryJob.result() makes a pair of jobs.get & jobs.getQueryResults
# REST API calls each iteration to determine if the job has finished
# or not.
#
# jobs.get (https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/get)
# is necessary to make sure the job has really finished via
# `Job.status.state == "DONE"` and to get necessary properties for
# `RowIterator` like the destination table.
#
# jobs.getQueryResults
# (https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/getQueryResults)
# with maxResults == 0 is technically optional,
# but it hangs up to 10 seconds until the job has finished. This
# makes sure we can know when the query has finished as close as
# possible to when the query finishes. It also gets properties
necessary for `RowIterator` that aren't available on the job
# resource such as the schema
# (https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/getQueryResults#body.GetQueryResultsResponse.FIELDS.schema)
# of the results.
job_resource,
query_resource,
# The query wasn't finished in the last call to jobs.get, so try
# again with a call to both jobs.get & jobs.getQueryResults.
job_resource,
query_resource_done,
# Even though the previous jobs.getQueryResults response says
# the job is complete, we haven't downloaded the full job status
# yet.
#
# Important: per internal issue 332850329, this response has
# `Job.status.state = "RUNNING"`. This ensures we are protected
# against possible eventual consistency issues where
# `jobs.getQueryResults` says jobComplete == True, but our next
# call to `jobs.get` still doesn't have
# `Job.status.state == "DONE"`.
job_resource,
# Try again until `Job.status.state == "DONE"`.
#
# Note: the call to `jobs.getQueryResults` is missing here as
# an optimization. We already received a "completed" response, so
# we won't learn anything new by calling that API again.
job_resource,
job_resource_done,
# When we iterate over the `RowIterator` we return from
# `QueryJob.result()`, we make additional calls to
# `jobs.getQueryResults` but this time allowing the actual rows
# to be returned as well.
query_page_resource,
)
client = _make_client(self.PROJECT, connection=conn)
job = self._get_target_class().from_api_repr(job_resource, client)

result = job.result()

self.assertIsInstance(result, RowIterator)
self.assertEqual(result.total_rows, 2)
rows = list(result)
self.assertEqual(len(rows), 1)
self.assertEqual(rows[0].col1, "abc")
self.assertEqual(result.job_id, self.JOB_ID)
self.assertEqual(result.location, "EU")
self.assertEqual(result.project, self.PROJECT)
self.assertEqual(result.query_id, "abc-def")
# Test that the total_rows property has changed during iteration, based
# on the response from tabledata.list.
self.assertEqual(result.total_rows, 1)

query_results_path = f"/projects/{self.PROJECT}/queries/{self.JOB_ID}"
query_results_call = mock.call(
method="GET",
path=query_results_path,
query_params={"maxResults": 0, "location": "EU"},
timeout=None,
)
reload_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}",
query_params={"projection": "full", "location": "EU"},
timeout=DEFAULT_GET_JOB_TIMEOUT,
)
query_page_call = mock.call(
method="GET",
path=query_results_path,
query_params={
"fields": _LIST_ROWS_FROM_QUERY_RESULTS_FIELDS,
"location": "EU",
"formatOptions.useInt64Timestamp": True,
},
timeout=None,
)
# Ensure that we actually made the expected API calls in the sequence
# we thought above at the make_connection() call above.
#
# Note: The responses from jobs.get and jobs.getQueryResults can be
# deceptively similar, so this check ensures we actually made the
# requests we expected.
> conn.api_request.assert_has_calls(
[
# jobs.get & jobs.getQueryResults because the job just started.
reload_call,
query_results_call,
# jobs.get & jobs.getQueryResults because the query is still
# running.
reload_call,
query_results_call,
# We got a jobComplete response from the most recent call to
# jobs.getQueryResults, so now call jobs.get until we get
# `Jobs.status.state == "DONE"`. This tests a fix for internal
# issue b/332850329.
reload_call,
reload_call,
reload_call,
# jobs.getQueryResults without `maxResults` set to download
# the rows as we iterate over the `RowIterator`.
query_page_call,
]
)

tests/unit/job/test_query.py:999:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeo...='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128), ...]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'EU'}, timeout=None),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'EU'}, timeout=None),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'EU', 'formatOptions.useInt64Timestamp': True}, timeout=None)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'EU'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'EU'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(timeout=None, method='GET', path='/projects/project/queries/JOB_ID', query_params={'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'EU', 'formatOptions.useInt64Timestamp': True})]

/usr/local/lib/python3.8/unittest/mock.py:950: AssertionError
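
Note: the polling contract this test encodes fits in a few lines. A minimal sketch (not the library's implementation) of poll-until-DONE against the two endpoints, matching the call sequence asserted above:

def wait_for_done(api_request, project, job_id, location):
    # jobs.getQueryResults is only consulted until it first reports
    # jobComplete; jobs.get is then repeated until status.state == "DONE",
    # guarding against the eventual-consistency gap from b/332850329.
    job_complete = False
    while True:
        job = api_request(
            method="GET",
            path=f"/projects/{project}/jobs/{job_id}",
            query_params={"projection": "full", "location": location},
        )
        if job["status"]["state"] == "DONE":
            return job
        if not job_complete:
            results = api_request(
                method="GET",
                path=f"/projects/{project}/queries/{job_id}",
                query_params={"maxResults": 0, "location": location},
            )
            job_complete = results.get("jobComplete", False)
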
___________________ TestQueryJob.test_result_w_custom_retry ____________________

self =

def test_result_w_custom_retry(self):
from google.cloud.bigquery.table import RowIterator

query_resource = {
"jobComplete": False,
"jobReference": {"projectId": self.PROJECT, "jobId": self.JOB_ID},
}
query_resource_done = {
"jobComplete": True,
"jobReference": {"projectId": self.PROJECT, "jobId": self.JOB_ID},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
"totalRows": "2",
}
job_resource = self._make_resource(started=True, location="asia-northeast1")
job_resource_done = self._make_resource(
started=True, ended=True, location="asia-northeast1"
)
job_resource_done["configuration"]["query"]["destinationTable"] = {
"projectId": "dest-project",
"datasetId": "dest_dataset",
"tableId": "dest_table",
}

connection = make_connection(
# Also, for each API request, raise an exception that we know can
# be retried. Because of this, for each iteration we do:
# jobs.get (x2) & jobs.getQueryResults (x2)
exceptions.NotFound("not normally retriable"),
job_resource,
exceptions.NotFound("not normally retriable"),
query_resource,
# Query still not done, repeat both.
exceptions.NotFound("not normally retriable"),
job_resource,
exceptions.NotFound("not normally retriable"),
query_resource,
exceptions.NotFound("not normally retriable"),
# Query still not done, repeat both.
job_resource_done,
exceptions.NotFound("not normally retriable"),
query_resource_done,
# Query finished!
)
client = _make_client(self.PROJECT, connection=connection)
job = self._get_target_class().from_api_repr(job_resource, client)

custom_predicate = mock.Mock()
custom_predicate.return_value = True
custom_retry = google.api_core.retry.Retry(
initial=0.001,
maximum=0.001,
multiplier=1.0,
deadline=0.1,
predicate=custom_predicate,
)

self.assertIsInstance(job.result(retry=custom_retry), RowIterator)
query_results_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/queries/{self.JOB_ID}",
query_params={"maxResults": 0, "location": "asia-northeast1"},
# TODO(tswast): Why do we end up setting timeout to
# google.cloud.bigquery.client._MIN_GET_QUERY_RESULTS_TIMEOUT in
# some cases but not others?
timeout=mock.ANY,
)
reload_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}",
query_params={"projection": "full", "location": "asia-northeast1"},
timeout=DEFAULT_GET_JOB_TIMEOUT,
)

> connection.api_request.assert_has_calls(
[
# See make_connection() call above for explanation of the
# expected API calls.
#
# Query not done.
reload_call,
reload_call,
query_results_call,
query_results_call,
# Query still not done.
reload_call,
reload_call,
query_results_call,
query_results_call,
# Query done!
reload_call,
reload_call,
query_results_call,
query_results_call,
]
)

tests/unit/job/test_query.py:1400:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northe...'/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128), ...]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=<ANY>),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=<ANY>),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=<ANY>),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=<ANY>),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=<ANY>),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=<ANY>)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=None),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=None),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=None),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=None)]

/usr/local/lib/python3.8/unittest/mock.py:950: AssertionError
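
Note: the custom retry in this test treats every exception as retriable, which is why each mocked NotFound is swallowed and the request repeated. A minimal sketch of that pattern with google.api_core, mirroring the (deliberately tiny) values from the test:

from google.api_core import retry as retries

# Every exception satisfies the predicate, so each call is retried until
# the 0.1 s deadline expires.
always_retry = retries.Retry(
    predicate=lambda exc: True,
    initial=0.001,
    maximum=0.001,
    multiplier=1.0,
    deadline=0.1,
)

Passing retry=always_retry to job.result() then applies this policy to each jobs.get / jobs.getQueryResults request.
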
_____________________ TestQueryJob.test_result_w_page_size _____________________

self =

def test_result_w_page_size(self):
# Arrange
query_results_resource = {
"jobComplete": True,
"jobReference": {"projectId": self.PROJECT, "jobId": self.JOB_ID},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
"totalRows": "10",
"rows": [
{"f": [{"v": "row1"}]},
{"f": [{"v": "row2"}]},
{"f": [{"v": "row3"}]},
{"f": [{"v": "row4"}]},
{"f": [{"v": "row5"}]},
{"f": [{"v": "row6"}]},
{"f": [{"v": "row7"}]},
{"f": [{"v": "row8"}]},
{"f": [{"v": "row9"}]},
],
"pageToken": "first-page-token",
}
job_resource_running = self._make_resource(
started=True, ended=False, location="US"
)
job_resource_done = self._make_resource(started=True, ended=True, location="US")
destination_table = {
"projectId": self.PROJECT,
"datasetId": self.DS_ID,
"tableId": self.TABLE_ID,
}
q_config = job_resource_done["configuration"]["query"]
q_config["destinationTable"] = destination_table
query_page_resource_2 = {"totalRows": 10, "rows": [{"f": [{"v": "row10"}]}]}
conn = make_connection(
job_resource_running,
query_results_resource,
job_resource_done,
query_page_resource_2,
)
client = _make_client(self.PROJECT, connection=conn)
job = self._get_target_class().from_api_repr(job_resource_running, client)

# Act
result = job.result(page_size=9)

# Assert
actual_rows = list(result)
self.assertEqual(len(actual_rows), 10)

jobs_get_path = f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}"
jobs_get_call = mock.call(
method="GET",
path=jobs_get_path,
query_params={"projection": "full", "location": "US"},
timeout=DEFAULT_GET_JOB_TIMEOUT,
)
query_results_path = f"/projects/{self.PROJECT}/queries/{self.JOB_ID}"
query_page_waiting_call = mock.call(
method="GET",
path=query_results_path,
query_params={
# Waiting for the results should set maxResults and cache the
# first page if page_size is set. This allows customers to
# more finely tune when we fallback to the BQ Storage API.
# See internal issue: 344008814.
"maxResults": 9,
"location": "US",
"formatOptions.useInt64Timestamp": True,
},
timeout=None,
)
query_page_2_call = mock.call(
timeout=None,
method="GET",
path=query_results_path,
query_params={
"pageToken": "first-page-token",
"maxResults": 9,
"fields": _LIST_ROWS_FROM_QUERY_RESULTS_FIELDS,
"location": "US",
"formatOptions.useInt64Timestamp": True,
},
)
> conn.api_request.assert_has_calls(
[jobs_get_call, query_page_waiting_call, jobs_get_call, query_page_2_call]
)

tests/unit/job/test_query.py:1625:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeo...ts': 9, 'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'US', 'formatOptions.useInt64Timestamp': True})]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 9, 'location': 'US', 'formatOptions.useInt64Timestamp': True}, timeout=None),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128),
E call(timeout=None, method='GET', path='/projects/project/queries/JOB_ID', query_params={'pageToken': 'first-page-token', 'maxResults': 9, 'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'US', 'formatOptions.useInt64Timestamp': True})]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 9, 'formatOptions.useInt64Timestamp': True, 'location': 'US'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(timeout=None, method='GET', path='/projects/project/queries/JOB_ID', query_params={'pageToken': 'first-page-token', 'maxResults': 9, 'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'US', 'formatOptions.useInt64Timestamp': True})]

/usr/local/lib/python3.8/unittest/mock.py:950: AssertionError
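
Note: the behavior under test, sketched as usage. Passing page_size to result() makes the wait call itself fetch (and cache) the first page with maxResults set, so subsequent pages reuse the returned pageToken. With a hypothetical client and table:

from google.cloud import bigquery

client = bigquery.Client()  # hypothetical project/credentials
job = client.query("SELECT name FROM `dataset.table`")  # hypothetical table
for row in job.result(page_size=9):  # at most 9 rows per REST page
    print(row)
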
_______________ TestQueryJob.test_result_w_timeout_doesnt_raise ________________

self =

def test_result_w_timeout_doesnt_raise(self):
import google.cloud.bigquery.client

begun_resource = self._make_resource()
query_resource = {
"jobComplete": True,
"jobReference": {"projectId": self.PROJECT, "jobId": self.JOB_ID},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
}
done_resource = copy.deepcopy(begun_resource)
done_resource["status"] = {"state": "DONE"}
connection = make_connection(begun_resource, query_resource, done_resource)
client = _make_client(project=self.PROJECT, connection=connection)
job = self._make_one(self.JOB_ID, self.QUERY, client)
job._properties["jobReference"]["location"] = "US"
job._properties["status"] = {"state": "RUNNING"}

with freezegun.freeze_time("1970-01-01 00:00:00", tick=False):
job.result(
# Test that fractional seconds are supported, but use a timeout
# that is representable as a floating point without rounding
# errors since it can be represented exactly in base 2. In this
# case 1.125 is 9 / 8, which is a fraction with a power of 2 in
# the denominator.
timeout=1.125,
)

reload_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}",
query_params={"projection": "full", "location": "US"},
timeout=1.125,
)
get_query_results_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/queries/{self.JOB_ID}",
query_params={
"maxResults": 0,
"location": "US",
},
timeout=google.cloud.bigquery.client._MIN_GET_QUERY_RESULTS_TIMEOUT,
)
> connection.api_request.assert_has_calls(
[
reload_call,
get_query_results_call,
reload_call,
]
)

tests/unit/job/test_query.py:1489:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeo...hod='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=1.125)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=1.125),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'US'}, timeout=120),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=1.125)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=1.125),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'US'}, timeout=120),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=1.125)]

/usr/local/lib/python3.8/unittest/mock.py:950: AssertionError
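
Note: the test's choice of 1.125 (= 9/8) is deliberate. A denominator that is a power of two makes the timeout exactly representable in binary floating point, so it survives arithmetic and comparisons unchanged. A one-line check:

# 1.125 = 9/8 is exact in base 2: a single mantissa bit at 2**-3.
assert (1.125).hex() == "0x1.2000000000000p+0"
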
_____ TestQueryJob.test_result_w_timeout_raises_concurrent_futures_timeout _____

self =

def test_result_w_timeout_raises_concurrent_futures_timeout(self):
import google.cloud.bigquery.client

begun_resource = self._make_resource()
begun_resource["jobReference"]["location"] = "US"
query_resource = {
"jobComplete": True,
"jobReference": {"projectId": self.PROJECT, "jobId": self.JOB_ID},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
}
done_resource = copy.deepcopy(begun_resource)
done_resource["status"] = {"state": "DONE"}
connection = make_connection(begun_resource, query_resource, done_resource)
client = _make_client(project=self.PROJECT, connection=connection)
job = self._make_one(self.JOB_ID, self.QUERY, client)
job._properties["jobReference"]["location"] = "US"
job._properties["status"] = {"state": "RUNNING"}

with freezegun.freeze_time(
"1970-01-01 00:00:00", auto_tick_seconds=1.0
), self.assertRaises(concurrent.futures.TimeoutError):
job.result(timeout=1.125)

reload_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}",
query_params={"projection": "full", "location": "US"},
timeout=1.125,
)
get_query_results_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/queries/{self.JOB_ID}",
query_params={
"maxResults": 0,
"location": "US",
},
timeout=google.cloud.bigquery.client._MIN_GET_QUERY_RESULTS_TIMEOUT,
)
> connection.api_request.assert_has_calls(
[
reload_call,
get_query_results_call,
# Timeout before we can reload with the final job state.
]
)

tests/unit/job/test_query.py:1535:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeo...(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'US'}, timeout=120)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=1.125),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'US'}, timeout=120)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=1.125),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'US'}, timeout=120)]

/usr/local/lib/python3.8/unittest/mock.py:950: AssertionError
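
Note: as this test demonstrates, result(timeout=...) bounds the overall wait and surfaces expiry as concurrent.futures.TimeoutError. A hedged usage sketch, assuming job is a started QueryJob:

import concurrent.futures

try:
    rows = job.result(timeout=1.125)  # total polling budget in seconds
except concurrent.futures.TimeoutError:
    # The job may still be running server-side; poll again or cancel it.
    pass
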
________ TestQueryJob.test_result_with_done_job_calls_get_query_results ________

self =

def test_result_with_done_job_calls_get_query_results(self):
query_resource_done = {
"jobComplete": True,
"jobReference": {"projectId": self.PROJECT, "jobId": self.JOB_ID},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
"totalRows": "1",
}
job_resource = self._make_resource(started=True, ended=True, location="EU")
job_resource["configuration"]["query"]["destinationTable"] = {
"projectId": "dest-project",
"datasetId": "dest_dataset",
"tableId": "dest_table",
}
results_page_resource = {
"totalRows": "1",
"pageToken": None,
"rows": [{"f": [{"v": "abc"}]}],
}
conn = make_connection(query_resource_done, results_page_resource)
client = _make_client(self.PROJECT, connection=conn)
job = self._get_target_class().from_api_repr(job_resource, client)

result = job.result()

rows = list(result)
self.assertEqual(len(rows), 1)
self.assertEqual(rows[0].col1, "abc")

query_results_path = f"/projects/{self.PROJECT}/queries/{self.JOB_ID}"
query_results_call = mock.call(
method="GET",
path=query_results_path,
query_params={"maxResults": 0, "location": "EU"},
timeout=None,
)
query_results_page_call = mock.call(
method="GET",
path=query_results_path,
query_params={
"fields": _LIST_ROWS_FROM_QUERY_RESULTS_FIELDS,
"location": "EU",
"formatOptions.useInt64Timestamp": True,
},
timeout=None,
)
> conn.api_request.assert_has_calls([query_results_call, query_results_page_call])

tests/unit/job/test_query.py:1165:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'EU'}, timeout...s': 'jobReference,totalRows,pageToken,rows', 'location': 'EU', 'formatOptions.useInt64Timestamp': True}, timeout=None)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'EU'}, timeout=None),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'EU', 'formatOptions.useInt64Timestamp': True}, timeout=None)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'EU'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(timeout=None, method='GET', path='/projects/project/queries/JOB_ID', query_params={'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'EU', 'formatOptions.useInt64Timestamp': True})]

/usr/local/lib/python3.8/unittest/mock.py:950: AssertionError
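
The extra `call.__code__.co_flags.__and__(128)` / `__bool__()` entries in the "Actual" list are the fingerprint of a coroutine probe run against the mocked connection: 128 is CO_COROUTINE, the code-object flag that marks `async def` functions, and reading `__code__` on a mock records child-mock calls. Those recorded probes land between the real `api_request` invocations, which is what breaks the sequential `assert_has_calls` above. A minimal sketch reproducing the pattern (assumptions: the probe shown stands in for whatever check the code under test performs, and `api_request` below is a hypothetical spec function, not the library's):

    from unittest import mock

    CO_COROUTINE = 128  # flag set on the code object of an ``async def``

    def api_request(method, path, **kwargs):
        """Hypothetical spec standing in for Connection.api_request."""

    m = mock.MagicMock(spec=api_request)
    bool(m.__code__.co_flags & CO_COROUTINE)  # assumed coroutine probe
    m(method="GET", path="/projects/p/queries/j", timeout=None)

    print(m.mock_calls)
    # [call.__code__.co_flags.__and__(128),
    #  call.__code__.co_flags.__and__().__bool__(),
    #  call(method='GET', path='/projects/p/queries/j', timeout=None)]
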
__________________ TestQueryJob.test_result_with_max_results ___________________

self =

def test_result_with_max_results(self):
from google.cloud.bigquery.table import RowIterator

query_resource = {
"jobComplete": True,
"jobReference": {"projectId": self.PROJECT, "jobId": self.JOB_ID},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
"totalRows": "10",
"pageToken": "first-page-token",
"rows": [
{"f": [{"v": "abc"}]},
{"f": [{"v": "def"}]},
{"f": [{"v": "ghi"}]},
{"f": [{"v": "jkl"}]},
{"f": [{"v": "mno"}]},
{"f": [{"v": "pqr"}]},
# Pretend these are very large rows, so the API doesn't return
# all of the rows we asked for in the first response.
],
}
query_page_resource = {
"totalRows": "10",
"pageToken": None,
"rows": [
{"f": [{"v": "stu"}]},
{"f": [{"v": "vwx"}]},
{"f": [{"v": "yz0"}]},
],
}
job_resource_running = self._make_resource(
started=True, ended=False, location="US"
)
job_resource_done = self._make_resource(started=True, ended=True, location="US")
conn = make_connection(job_resource_done, query_resource, query_page_resource)
client = _make_client(self.PROJECT, connection=conn)
job = self._get_target_class().from_api_repr(job_resource_running, client)

max_results = 9
result = job.result(max_results=max_results)

self.assertIsInstance(result, RowIterator)
self.assertEqual(result.total_rows, 10)

rows = list(result)

self.assertEqual(len(rows), 9)
jobs_get_path = f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}"
jobs_get_call = mock.call(
method="GET",
path=jobs_get_path,
query_params={"projection": "full", "location": "US"},
timeout=DEFAULT_GET_JOB_TIMEOUT,
)
query_results_path = f"/projects/{self.PROJECT}/queries/{self.JOB_ID}"
query_page_waiting_call = mock.call(
method="GET",
path=query_results_path,
query_params={
# Waiting for the results should set maxResults and cache the
# first page if max_results is set. This allows customers to
# more finely tune when we fall back to the BQ Storage API.
# See internal issue: 344008814.
"maxResults": max_results,
"formatOptions.useInt64Timestamp": True,
"location": "US",
},
timeout=None,
)
query_page_2_call = mock.call(
timeout=None,
method="GET",
path=query_results_path,
query_params={
"pageToken": "first-page-token",
"maxResults": 3,
"fields": _LIST_ROWS_FROM_QUERY_RESULTS_FIELDS,
"location": "US",
"formatOptions.useInt64Timestamp": True,
},
)
# Waiting for the results should set maxResults and cache the
# first page if max_results is set. This allows customers to
# more finely tune when we fall back to the BQ Storage API.
# See internal issue: 344008814.
> conn.api_request.assert_has_calls(
[jobs_get_call, query_page_waiting_call, query_page_2_call]
)

tests/unit/job/test_query.py:1323:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeo...ts': 3, 'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'US', 'formatOptions.useInt64Timestamp': True})]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 9, 'formatOptions.useInt64Timestamp': True, 'location': 'US'}, timeout=None),
E call(timeout=None, method='GET', path='/projects/project/queries/JOB_ID', query_params={'pageToken': 'first-page-token', 'maxResults': 3, 'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'US', 'formatOptions.useInt64Timestamp': True})]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 9, 'formatOptions.useInt64Timestamp': True, 'location': 'US'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(timeout=None, method='GET', path='/projects/project/queries/JOB_ID', query_params={'pageToken': 'first-page-token', 'maxResults': 3, 'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'US', 'formatOptions.useInt64Timestamp': True})]

/usr/local/lib/python3.8/unittest/mock.py:950: AssertionError
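
`assert_has_calls` with `any_order=False` (the default, per the docstring above) looks for the expected calls as one contiguous run inside `mock_calls`; keyword order is irrelevant, since `call` equality compares kwargs as dicts. The probe entries interleaved above therefore break every sequential assertion even when the real requests match. One hypothetical workaround, not part of python-bigquery, is to assert against only the direct invocations:

    from unittest import mock

    def direct_api_calls(m: mock.Mock) -> list:
        # mock_calls entries unpack as (name, args, kwargs); a direct
        # invocation of the mock itself has an empty name.
        return [c for c in m.mock_calls if c[0] == ""]

    # Usage sketch against a mocked connection:
    #     assert direct_api_calls(conn.api_request)[:2] == [
    #         jobs_get_call, query_page_waiting_call]
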
_____________ TestClient.test_create_routine_w_conflict_exists_ok ______________

self =

def test_create_routine_w_conflict_exists_ok(self):
from google.cloud.bigquery.routine import Routine

creds = _make_credentials()
client = self._make_one(project=self.PROJECT, credentials=creds)
resource = {
"routineReference": {
"projectId": "test-routine-project",
"datasetId": "test_routines",
"routineId": "minimal_routine",
}
}
path = "/projects/test-routine-project/datasets/test_routines/routines"

conn = client._connection = make_connection(
google.api_core.exceptions.AlreadyExists("routine already exists"), resource
)
full_routine_id = "test-routine-project.test_routines.minimal_routine"
routine = Routine(full_routine_id)
with mock.patch(
"google.cloud.bigquery.opentelemetry_tracing._get_final_span_attributes"
) as final_attributes:
actual_routine = client.create_routine(routine, exists_ok=True)

final_attributes.assert_called_with(
{"path": "%s/minimal_routine" % path}, client, None
)

self.assertEqual(actual_routine.project, "test-routine-project")
self.assertEqual(actual_routine.dataset_id, "test_routines")
self.assertEqual(actual_routine.routine_id, "minimal_routine")
> conn.api_request.assert_has_calls(
[
mock.call(
method="POST",
path=path,
data=resource,
timeout=DEFAULT_TIMEOUT,
),
mock.call(
method="GET",
path="/projects/test-routine-project/datasets/test_routines/routines/minimal_routine",
timeout=DEFAULT_TIMEOUT,
),
]
)

tests/unit/test_client.py:1058:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/test-routine-project/datasets/test_routines/routines', data={'routineReference': ...all(method='GET', path='/projects/test-routine-project/datasets/test_routines/routines/minimal_routine', timeout=None)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/test-routine-project/datasets/test_routines/routines', data={'routineReference': {'projectId': 'test-routine-project', 'datasetId': 'test_routines', 'routineId': 'minimal_routine'}}, timeout=None),
E call(method='GET', path='/projects/test-routine-project/datasets/test_routines/routines/minimal_routine', timeout=None)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/test-routine-project/datasets/test_routines/routines', data={'routineReference': {'projectId': 'test-routine-project', 'datasetId': 'test_routines', 'routineId': 'minimal_routine'}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/test-routine-project/datasets/test_routines/routines/minimal_routine', timeout=None)]

/usr/local/lib/python3.8/unittest/mock.py:950: AssertionError
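
This failure, like the create_table and create_dataset ones below, exercises the `exists_ok=True` contract: one POST to insert the resource and, when that raises `AlreadyExists`, a follow-up GET of the conflicting resource. A condensed sketch of roughly what `create_routine(..., exists_ok=True)` does under the hood, per the expected calls above, using identifiers from the test:

    from google.api_core.exceptions import AlreadyExists
    from google.cloud import bigquery
    from google.cloud.bigquery.routine import Routine

    client = bigquery.Client()
    routine = Routine("test-routine-project.test_routines.minimal_routine")
    try:
        routine = client.create_routine(routine)          # POST .../routines
    except AlreadyExists:
        routine = client.get_routine(routine.reference)   # GET .../routines/minimal_routine
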
_________ TestClient.test_create_table_alreadyexists_w_exists_ok_true __________

self =

def test_create_table_alreadyexists_w_exists_ok_true(self):
post_path = "/projects/{}/datasets/{}/tables".format(self.PROJECT, self.DS_ID)
get_path = "/projects/{}/datasets/{}/tables/{}".format(
self.PROJECT, self.DS_ID, self.TABLE_ID
)
resource = self._make_table_resource()
creds = _make_credentials()
client = self._make_one(
project=self.PROJECT, credentials=creds, location=self.LOCATION
)
conn = client._connection = make_connection(
google.api_core.exceptions.AlreadyExists("table already exists"), resource
)

with mock.patch(
"google.cloud.bigquery.opentelemetry_tracing._get_final_span_attributes"
) as final_attributes:
got = client.create_table(
"{}.{}".format(self.DS_ID, self.TABLE_ID), exists_ok=True
)

final_attributes.assert_called_with({"path": get_path}, client, None)

self.assertEqual(got.project, self.PROJECT)
self.assertEqual(got.dataset_id, self.DS_ID)
self.assertEqual(got.table_id, self.TABLE_ID)

> conn.api_request.assert_has_calls(
[
mock.call(
method="POST",
path=post_path,
data={
"tableReference": {
"projectId": self.PROJECT,
"datasetId": self.DS_ID,
"tableId": self.TABLE_ID,
},
"labels": {},
},
timeout=DEFAULT_TIMEOUT,
),
mock.call(method="GET", path=get_path, timeout=DEFAULT_TIMEOUT),
]
)

tests/unit/test_client.py:1510:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/PROJECT/datasets/DATASET_ID/tables', data={'tableReference': {'projectId': 'PROJE...s': {}}, timeout=None), call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID/tables/TABLE_ID', timeout=None)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/PROJECT/datasets/DATASET_ID/tables', data={'tableReference': {'projectId': 'PROJECT', 'datasetId': 'DATASET_ID', 'tableId': 'TABLE_ID'}, 'labels': {}}, timeout=None),
E call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID/tables/TABLE_ID', timeout=None)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/PROJECT/datasets/DATASET_ID/tables', data={'tableReference': {'projectId': 'PROJECT', 'datasetId': 'DATASET_ID', 'tableId': 'TABLE_ID'}, 'labels': {}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID/tables/TABLE_ID', timeout=None)]

/usr/local/lib/python3.8/unittest/mock.py:950: AssertionError
_____________ TestClient.test_list_rows_w_start_index_w_page_size ______________

self =

def test_list_rows_w_start_index_w_page_size(self):
from google.cloud.bigquery.schema import SchemaField
from google.cloud.bigquery.table import Table
from google.cloud.bigquery.table import Row

PATH = "projects/%s/datasets/%s/tables/%s/data" % (
self.PROJECT,
self.DS_ID,
self.TABLE_ID,
)

page_1 = {
"totalRows": 4,
"pageToken": "some-page-token",
"rows": [
{"f": [{"v": "Phred Phlyntstone"}]},
{"f": [{"v": "Bharney Rhubble"}]},
],
}
page_2 = {
"totalRows": 4,
"rows": [
{"f": [{"v": "Wylma Phlyntstone"}]},
{"f": [{"v": "Bhettye Rhubble"}]},
],
}
creds = _make_credentials()
http = object()
client = self._make_one(project=self.PROJECT, credentials=creds, _http=http)
conn = client._connection = make_connection(page_1, page_2)
full_name = SchemaField("full_name", "STRING", mode="REQUIRED")
table = Table(self.TABLE_REF, schema=[full_name])
iterator = client.list_rows(table, max_results=4, page_size=2, start_index=1)
pages = iterator.pages
rows = list(next(pages))
extra_params = iterator.extra_params
f2i = {"full_name": 0}
self.assertEqual(len(rows), 2)
self.assertEqual(rows[0], Row(("Phred Phlyntstone",), f2i))
self.assertEqual(rows[1], Row(("Bharney Rhubble",), f2i))

rows = list(next(pages))

self.assertEqual(len(rows), 2)
self.assertEqual(rows[0], Row(("Wylma Phlyntstone",), f2i))
self.assertEqual(rows[1], Row(("Bhettye Rhubble",), f2i))
self.assertEqual(
extra_params, {"startIndex": 1, "formatOptions.useInt64Timestamp": True}
)

> conn.api_request.assert_has_calls(
[
mock.call(
method="GET",
path="/%s" % PATH,
query_params={
"startIndex": 1,
"maxResults": 2,
"formatOptions.useInt64Timestamp": True,
},
timeout=DEFAULT_TIMEOUT,
),
mock.call(
method="GET",
path="/%s" % PATH,
query_params={
"pageToken": "some-page-token",
"maxResults": 2,
"formatOptions.useInt64Timestamp": True,
},
timeout=DEFAULT_TIMEOUT,
),
]
)

tests/unit/test_client.py:6858:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID/tables/TABLE_ID/data', query_params={'startIndex': 1, ...query_params={'pageToken': 'some-page-token', 'maxResults': 2, 'formatOptions.useInt64Timestamp': True}, timeout=None)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID/tables/TABLE_ID/data', query_params={'startIndex': 1, 'maxResults': 2, 'formatOptions.useInt64Timestamp': True}, timeout=None),
E call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID/tables/TABLE_ID/data', query_params={'pageToken': 'some-page-token', 'maxResults': 2, 'formatOptions.useInt64Timestamp': True}, timeout=None)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(timeout=None, method='GET', path='/projects/PROJECT/datasets/DATASET_ID/tables/TABLE_ID/data', query_params={'maxResults': 2, 'startIndex': 1, 'formatOptions.useInt64Timestamp': True}),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(timeout=None, method='GET', path='/projects/PROJECT/datasets/DATASET_ID/tables/TABLE_ID/data', query_params={'pageToken': 'some-page-token', 'maxResults': 2, 'formatOptions.useInt64Timestamp': True})]

/usr/local/lib/python3.8/unittest/mock.py:950: AssertionError
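
The expected sequence documents the tabledata.list paging contract: the first request carries `startIndex` plus `maxResults` (the page size), and each later request swaps `startIndex` for the `pageToken` returned by the previous page. From user code that sequence falls out of iterating pages:

    from google.cloud import bigquery

    client = bigquery.Client()
    rows = client.list_rows(
        "PROJECT.DATASET_ID.TABLE_ID",  # placeholder IDs echoing the test above
        max_results=4,
        page_size=2,
        start_index=1,
    )
    for page in rows.pages:  # each page fetch is one tabledata.list request
        for row in page:
            print(row["full_name"])
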
_________ TestClient.test_query_and_wait_w_page_size_multiple_requests _________

self =

def test_query_and_wait_w_page_size_multiple_requests(self):
"""
For queries that last longer than the initial (about 10s) call to
jobs.query, we should still pass through the page size to the
subsequent calls to jobs.getQueryResults.

See internal issue 344008814.
"""
query = "select count(*) from `bigquery-public-data.usa_names.usa_1910_2013`"
job_reference = {
"projectId": "my-jobs-project",
"location": "my-jobs-location",
"jobId": "my-jobs-id",
}
jobs_query_response = {
"jobComplete": False,
"jobReference": job_reference,
}
jobs_get_response = {
"jobReference": job_reference,
"status": {"state": "DONE"},
}
get_query_results_response = {
"jobComplete": True,
}
creds = _make_credentials()
http = object()
client = self._make_one(project=self.PROJECT, credentials=creds, _http=http)
conn = client._connection = make_connection(
jobs_query_response,
jobs_get_response,
get_query_results_response,
)

_ = client.query_and_wait(query, page_size=11)

> conn.api_request.assert_has_calls(
[
# Verify the request we send is to jobs.query.
mock.call(
method="POST",
path=f"/projects/{self.PROJECT}/queries",
data={
"useLegacySql": False,
"query": query,
"formatOptions": {"useInt64Timestamp": True},
"maxResults": 11,
"requestId": mock.ANY,
},
timeout=None,
),
# jobs.get: Check if the job has finished.
mock.call(
method="GET",
path="/projects/my-jobs-project/jobs/my-jobs-id",
query_params={
"projection": "full",
"location": "my-jobs-location",
},
timeout=google.cloud.bigquery.retry.DEFAULT_GET_JOB_TIMEOUT,
),
# jobs.getQueryResults: wait for the query / fetch first page
mock.call(
method="GET",
path="/projects/my-jobs-project/queries/my-jobs-id",
query_params={
# We should still pass through the page size to the
# subsequent calls to jobs.getQueryResults.
#
# See internal issue 344008814.
"maxResults": 11,
"formatOptions.useInt64Timestamp": True,
"location": "my-jobs-location",
},
timeout=None,
),
]
)

tests/unit/test_client.py:5526:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/PROJECT/queries', data={'useLegacySql': False, 'query': 'select count(*) from `bi...uery_params={'maxResults': 11, 'formatOptions.useInt64Timestamp': True, 'location': 'my-jobs-location'}, timeout=None)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/PROJECT/queries', data={'useLegacySql': False, 'query': 'select count(*) from `bigquery-public-data.usa_names.usa_1910_2013`', 'formatOptions': {'useInt64Timestamp': True}, 'maxResults': 11, 'requestId': <ANY>}, timeout=None),
E call(method='GET', path='/projects/my-jobs-project/jobs/my-jobs-id', query_params={'projection': 'full', 'location': 'my-jobs-location'}, timeout=128),
E call(method='GET', path='/projects/my-jobs-project/queries/my-jobs-id', query_params={'maxResults': 11, 'formatOptions.useInt64Timestamp': True, 'location': 'my-jobs-location'}, timeout=None)]
E Actual: [call(method='POST', path='/projects/PROJECT/queries', data={'useLegacySql': False, 'formatOptions': {'useInt64Timestamp': True}, 'query': 'select count(*) from `bigquery-public-data.usa_names.usa_1910_2013`', 'maxResults': 11, 'requestId': '66a8cb39-81a0-439f-a365-2879af50cbc1'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/my-jobs-project/jobs/my-jobs-id', query_params={'projection': 'full', 'location': 'my-jobs-location'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/my-jobs-project/queries/my-jobs-id', query_params={'maxResults': 11, 'formatOptions.useInt64Timestamp': True, 'location': 'my-jobs-location'}, timeout=None)]

/usr/local/lib/python3.8/unittest/mock.py:950: AssertionError
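
As the docstring above notes, `page_size` must survive past the initial jobs.query call: it is sent as `maxResults` on the POST and again on the follow-up jobs.getQueryResults once the job outlives the first request. From the caller's side this is simply:

    from google.cloud import bigquery

    client = bigquery.Client()
    rows = client.query_and_wait(
        "select count(*) from `bigquery-public-data.usa_names.usa_1910_2013`",
        page_size=11,  # forwarded as maxResults to jobs.query and jobs.getQueryResults
    )
    print(rows.total_rows)
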
______________ test_create_dataset_alreadyexists_w_exists_ok_true ______________

PROJECT = 'PROJECT', DS_ID = 'DATASET_ID', LOCATION = 'us-central'

def test_create_dataset_alreadyexists_w_exists_ok_true(PROJECT, DS_ID, LOCATION):
post_path = "/projects/{}/datasets".format(PROJECT)
get_path = "/projects/{}/datasets/{}".format(PROJECT, DS_ID)
resource = {
"datasetReference": {"projectId": PROJECT, "datasetId": DS_ID},
"etag": "etag",
"id": "{}:{}".format(PROJECT, DS_ID),
"location": LOCATION,
}
client = make_client(location=LOCATION)
conn = client._connection = make_connection(
google.api_core.exceptions.AlreadyExists("dataset already exists"), resource
)
with mock.patch(
"google.cloud.bigquery.opentelemetry_tracing._get_final_span_attributes"
) as final_attributes:
dataset = client.create_dataset(DS_ID, exists_ok=True)

final_attributes.assert_called_with({"path": get_path}, client, None)

assert dataset.dataset_id == DS_ID
assert dataset.project == PROJECT
assert dataset.etag == resource["etag"]
assert dataset.full_dataset_id == resource["id"]
assert dataset.location == LOCATION

> conn.api_request.assert_has_calls(
[
mock.call(
method="POST",
path=post_path,
data={
"datasetReference": {"projectId": PROJECT, "datasetId": DS_ID},
"labels": {},
"location": LOCATION,
},
timeout=DEFAULT_TIMEOUT,
),
mock.call(method="GET", path=get_path, timeout=DEFAULT_TIMEOUT),
]
)

tests/unit/test_create_dataset.py:358:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/PROJECT/datasets', data={'datasetReference': {'projectId': 'PROJECT', 'datasetId'...ocation': 'us-central'}, timeout=None), call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID', timeout=None)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/PROJECT/datasets', data={'datasetReference': {'projectId': 'PROJECT', 'datasetId': 'DATASET_ID'}, 'labels': {}, 'location': 'us-central'}, timeout=None),
E call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID', timeout=None)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/PROJECT/datasets', data={'datasetReference': {'projectId': 'PROJECT', 'datasetId': 'DATASET_ID'}, 'labels': {}, 'location': 'us-central'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID', timeout=None)]

/usr/local/lib/python3.8/unittest/mock.py:950: AssertionError
__ test_retry_connection_error_with_default_retries_and_successful_first_job ___

monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x7fe8ce8d8190>
client =

def test_retry_connection_error_with_default_retries_and_successful_first_job(
monkeypatch, client
):
"""
Make sure ConnectionError can be retried at `is_job_done` level, even if
retries are exhausted by API-level retry.

Note: Because restart_query_job is set to True only in the case of a
confirmed job failure, this should be safe to do even when a job is not
idempotent.

Regression test for issue
https://github.com/googleapis/python-bigquery/issues/1929
"""
job_counter = 0

def make_job_id(*args, **kwargs):
nonlocal job_counter
job_counter += 1
return f"{job_counter}"

monkeypatch.setattr(_job_helpers, "make_job_id", make_job_id)
conn = client._connection = make_connection()
project = client.project
job_reference_1 = {"projectId": project, "jobId": "1", "location": "test-loc"}
NUM_API_RETRIES = 2

with freezegun.freeze_time(
"2024-01-01 00:00:00",
# Note: because of exponential backoff and a bit of jitter,
# NUM_API_RETRIES will get less accurate the greater the value.
# We add 1 because we know there will be at least some additional
# calls to fetch the time / sleep before the retry deadline is hit.
auto_tick_seconds=(
google.cloud.bigquery.retry._DEFAULT_RETRY_DEADLINE / NUM_API_RETRIES
)
+ 1,
):
conn.api_request.side_effect = [
# jobs.insert
{"jobReference": job_reference_1, "status": {"state": "PENDING"}},
# jobs.get
{"jobReference": job_reference_1, "status": {"state": "RUNNING"}},
# jobs.getQueryResults x2
requests.exceptions.ConnectionError(),
requests.exceptions.ConnectionError(),
# jobs.get
# Job actually succeeded, so we shouldn't be restarting the job,
# even though we are retrying at the `is_job_done` level.
{"jobReference": job_reference_1, "status": {"state": "DONE"}},
# jobs.getQueryResults
{"jobReference": job_reference_1, "jobComplete": True},
]

job = client.query("select 1")
rows_iter = job.result()

assert job.done() # Shouldn't make any additional API calls.
assert rows_iter is not None

# Should only have created one job, even though we did call job_retry.
assert job_counter == 1

# Double-check that we made the API calls we expected to make.
> conn.api_request.assert_has_calls(
[
# jobs.insert
mock.call(
method="POST",
path="/projects/PROJECT/jobs",
data={
"jobReference": {"jobId": "1", "projectId": "PROJECT"},
"configuration": {
"query": {"useLegacySql": False, "query": "select 1"}
},
},
timeout=None,
),
# jobs.get
mock.call(
method="GET",
path="/projects/PROJECT/jobs/1",
query_params={"location": "test-loc", "projection": "full"},
timeout=google.cloud.bigquery.retry.DEFAULT_GET_JOB_TIMEOUT,
),
# jobs.getQueryResults x2
mock.call(
method="GET",
path="/projects/PROJECT/queries/1",
query_params={"maxResults": 0, "location": "test-loc"},
timeout=None,
),
mock.call(
method="GET",
path="/projects/PROJECT/queries/1",
query_params={"maxResults": 0, "location": "test-loc"},
timeout=None,
),
# jobs.get -- is_job_done checking again
mock.call(
method="GET",
path="/projects/PROJECT/jobs/1",
query_params={"location": "test-loc", "projection": "full"},
timeout=google.cloud.bigquery.retry.DEFAULT_GET_JOB_TIMEOUT,
),
# jobs.getQueryResults
mock.call(
method="GET",
path="/projects/PROJECT/queries/1",
query_params={"maxResults": 0, "location": "test-loc"},
timeout=None,
),
],
)

tests/unit/test_job_retry.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/PROJECT/jobs', data={'jobReference': {'jobId': '1', 'projectId': 'PROJECT'}, 'con...ethod='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/PROJECT/jobs', data={'jobReference': {'jobId': '1', 'projectId': 'PROJECT'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select 1'}}}, timeout=None),
E call(method='GET', path='/projects/PROJECT/jobs/1', query_params={'location': 'test-loc', 'projection': 'full'}, timeout=128),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call(method='GET', path='/projects/PROJECT/jobs/1', query_params={'location': 'test-loc', 'projection': 'full'}, timeout=128),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/PROJECT/jobs', data={'jobReference': {'jobId': '1', 'projectId': 'PROJECT'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select 1'}}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/jobs/1', query_params={'projection': 'full', 'location': 'test-loc'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/jobs/1', query_params={'projection': 'full', 'location': 'test-loc'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None)]

/usr/local/lib/python3.8/unittest/mock.py:950: AssertionError
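
This test pins down the two retry layers involved in `result()`: the API-level `retry` (exhausted here by the two injected ConnectionErrors) and the higher-level `job_retry`, which re-checks the job via jobs.get and leaves it alone once the state comes back DONE without an error. Both knobs are exposed on the public call:

    from google.cloud import bigquery
    from google.cloud.bigquery.retry import DEFAULT_JOB_RETRY, DEFAULT_RETRY

    client = bigquery.Client()
    job = client.query("select 1")
    # `retry` guards individual API requests; `job_retry` may restart the whole
    # job (with a fresh job ID), but only after jobs.get confirms a failure.
    rows = job.result(retry=DEFAULT_RETRY, job_retry=DEFAULT_JOB_RETRY)
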
_ test_query_retry_with_default_retry_and_ambiguous_errors_only_retries_with_failed_job _

client =
monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x7fe8cf448610>

def test_query_retry_with_default_retry_and_ambiguous_errors_only_retries_with_failed_job(
client, monkeypatch
):
"""
Some errors like 'rateLimitExceeded' can be ambiguous. Make sure we only
retry the job when we know for sure that the job has failed for a retriable
reason. We can only be sure after a "successful" call to jobs.get to fetch
the failed job status.
"""
job_counter = 0

def make_job_id(*args, **kwargs):
nonlocal job_counter
job_counter += 1
return f"{job_counter}"

monkeypatch.setattr(_job_helpers, "make_job_id", make_job_id)

project = client.project
job_reference_1 = {"projectId": project, "jobId": "1", "location": "test-loc"}
job_reference_2 = {"projectId": project, "jobId": "2", "location": "test-loc"}
NUM_API_RETRIES = 2

# This error is modeled after a real customer exception in
# https://github.com/googleapis/python-bigquery/issues/707.
internal_error = google.api_core.exceptions.InternalServerError(
"Job failed just because...",
errors=[
{"reason": "internalError"},
],
)
responses = [
# jobs.insert
{"jobReference": job_reference_1, "status": {"state": "PENDING"}},
# jobs.get
{"jobReference": job_reference_1, "status": {"state": "RUNNING"}},
# jobs.getQueryResults x2
#
# Note: internalError is ambiguous in jobs.getQueryResults. The
# problem could be at the Google Frontend level or it could be because
# the job has failed due to some transient issues and the BigQuery
# REST API is translating the job failed status into failure HTTP
# codes.
#
# TODO(GH#1903): We shouldn't retry nearly this many times when we get
# ambiguous errors from jobs.getQueryResults.
# See: https://github.com/googleapis/python-bigquery/issues/1903
internal_error,
internal_error,
# jobs.get -- the job has failed
{
"jobReference": job_reference_1,
"status": {"state": "DONE", "errorResult": {"reason": "internalError"}},
},
# jobs.insert
{"jobReference": job_reference_2, "status": {"state": "PENDING"}},
# jobs.get
{"jobReference": job_reference_2, "status": {"state": "RUNNING"}},
# jobs.getQueryResults
{"jobReference": job_reference_2, "jobComplete": True},
# jobs.get
{"jobReference": job_reference_2, "status": {"state": "DONE"}},
]

conn = client._connection = make_connection(*responses)

with freezegun.freeze_time(
# Note: because of exponential backoff and a bit of jitter,
# NUM_API_RETRIES will get less accurate the greater the value.
# We add 1 because we know there will be at least some additional
# calls to fetch the time / sleep before the retry deadline is hit.
auto_tick_seconds=(
google.cloud.bigquery.retry._DEFAULT_RETRY_DEADLINE / NUM_API_RETRIES
)
+ 1,
):
job = client.query("select 1")
job.result()

> conn.api_request.assert_has_calls(
[
# jobs.insert
mock.call(
method="POST",
path="/projects/PROJECT/jobs",
data={
"jobReference": {"jobId": "1", "projectId": "PROJECT"},
"configuration": {
"query": {"useLegacySql": False, "query": "select 1"}
},
},
timeout=None,
),
# jobs.get
mock.call(
method="GET",
path="/projects/PROJECT/jobs/1",
query_params={"location": "test-loc", "projection": "full"},
timeout=google.cloud.bigquery.retry.DEFAULT_GET_JOB_TIMEOUT,
),
# jobs.getQueryResults x2
mock.call(
method="GET",
path="/projects/PROJECT/queries/1",
query_params={"maxResults": 0, "location": "test-loc"},
timeout=None,
),
mock.call(
method="GET",
path="/projects/PROJECT/queries/1",
query_params={"maxResults": 0, "location": "test-loc"},
timeout=None,
),
# jobs.get -- verify that the job has failed
mock.call(
method="GET",
path="/projects/PROJECT/jobs/1",
query_params={"location": "test-loc", "projection": "full"},
timeout=google.cloud.bigquery.retry.DEFAULT_GET_JOB_TIMEOUT,
),
# jobs.insert
mock.call(
method="POST",
path="/projects/PROJECT/jobs",
data={
"jobReference": {
# Make sure that we generated a new job ID.
"jobId": "2",
"projectId": "PROJECT",
},
"configuration": {
"query": {"useLegacySql": False, "query": "select 1"}
},
},
timeout=None,
),
# jobs.get
mock.call(
method="GET",
path="/projects/PROJECT/jobs/2",
query_params={"location": "test-loc", "projection": "full"},
timeout=google.cloud.bigquery.retry.DEFAULT_GET_JOB_TIMEOUT,
),
# jobs.getQueryResults
mock.call(
method="GET",
path="/projects/PROJECT/queries/2",
query_params={"maxResults": 0, "location": "test-loc"},
timeout=None,
),
# jobs.get
mock.call(
method="GET",
path="/projects/PROJECT/jobs/2",
query_params={"location": "test-loc", "projection": "full"},
timeout=google.cloud.bigquery.retry.DEFAULT_GET_JOB_TIMEOUT,
),
]
)

tests/unit/test_job_retry.py:335:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/PROJECT/jobs', data={'jobReference': {'jobId': '1', 'projectId': 'PROJECT'}, 'con...'projectId': 'PROJECT'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select 1'}}}, timeout=None), ...]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/PROJECT/jobs', data={'jobReference': {'jobId': '1', 'projectId': 'PROJECT'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select 1'}}}, timeout=None),
E call(method='GET', path='/projects/PROJECT/jobs/1', query_params={'location': 'test-loc', 'projection': 'full'}, timeout=128),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call(method='GET', path='/projects/PROJECT/jobs/1', query_params={'location': 'test-loc', 'projection': 'full'}, timeout=128),
E call(method='POST', path='/projects/PROJECT/jobs', data={'jobReference': {'jobId': '2', 'projectId': 'PROJECT'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select 1'}}}, timeout=None),
E call(method='GET', path='/projects/PROJECT/jobs/2', query_params={'location': 'test-loc', 'projection': 'full'}, timeout=128),
E call(method='GET', path='/projects/PROJECT/queries/2', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call(method='GET', path='/projects/PROJECT/jobs/2', query_params={'location': 'test-loc', 'projection': 'full'}, timeout=128)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/PROJECT/jobs', data={'jobReference': {'jobId': '1', 'projectId': 'PROJECT'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select 1'}}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/jobs/1', query_params={'projection': 'full', 'location': 'test-loc'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/jobs/1', query_params={'projection': 'full', 'location': 'test-loc'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/PROJECT/jobs', data={'jobReference': {'jobId': '2', 'projectId': 'PROJECT'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select 1'}}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/jobs/2', query_params={'projection': 'full', 'location': 'test-loc'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/queries/2', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/jobs/2', query_params={'projection': 'full', 'location': 'test-loc'}, timeout=128)]

/usr/local/lib/python3.8/unittest/mock.py:950: AssertionError
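
The asserted ordering encodes the safety rule from the docstring: an ambiguous jobs.getQueryResults error never restarts the job by itself; only a jobs.get response showing a DONE state with an `errorResult` does, after which the retry mints a new job ID ("2"). A hedged sketch of that decision with illustrative names (python-bigquery implements it inside its job-retry predicate, not with a helper like this):

    def should_restart(job) -> bool:
        job.reload()  # jobs.get -- the only authoritative word on job state
        return job.state == "DONE" and job.error_result is not None
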
_____________________ test_context_with_default_connection _____________________

monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x7fe8cea2be80>

@pytest.mark.usefixtures("ipython_interactive")
@pytest.mark.skipif(pandas is None, reason="Requires `pandas`")
def test_context_with_default_connection(monkeypatch):
ip = IPython.get_ipython()
monkeypatch.setattr(bigquery, "bigquery_magics", None)
bigquery.load_ipython_extension(ip)
magics.context._credentials = None
magics.context._project = None
magics.context._connection = None

default_credentials = mock.create_autospec(
google.auth.credentials.Credentials, instance=True
)
credentials_patch = mock.patch(
"google.auth.default", return_value=(default_credentials, "project-from-env")
)
default_conn = make_connection(QUERY_RESOURCE, QUERY_RESULTS_RESOURCE)
conn_patch = mock.patch("google.cloud.bigquery.client.Connection", autospec=True)
list_rows_patch = mock.patch(
"google.cloud.bigquery.client.Client._list_rows_from_query_results",
return_value=google.cloud.bigquery.table._EmptyRowIterator(),
)

with conn_patch as conn, credentials_patch, list_rows_patch as list_rows:
conn.return_value = default_conn
ip.run_cell_magic("bigquery", "", QUERY_STRING)

# Check that query actually starts the job.
conn.assert_called()
list_rows.assert_called()
begin_call = mock.call(
method="POST",
path="/projects/project-from-env/jobs",
data=mock.ANY,
timeout=DEFAULT_TIMEOUT,
)
query_results_call = mock.call(
method="GET",
path=f"/projects/{PROJECT_ID}/queries/{JOB_ID}",
query_params=mock.ANY,
timeout=mock.ANY,
)
> default_conn.api_request.assert_has_calls([begin_call, query_results_call])

tests/unit/test_magics.py:198:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/project-from-env/jobs', data=<ANY>, timeout=None), call(method='GET', path='/projects/its-a-project-eh/queries/some-random-id', query_params=<ANY>, timeout=<ANY>)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/project-from-env/jobs', data=<ANY>, timeout=None),
E call(method='GET', path='/projects/its-a-project-eh/queries/some-random-id', query_params=<ANY>, timeout=<ANY>)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/project-from-env/jobs', data={'jobReference': {'jobId': 'a751bbee-1341-484f-9409-fe40facc6b72', 'projectId': 'project-from-env'}, 'configuration': {'query': {'queryParameters': [], 'useLegacySql': False, 'query': 'SELECT 42 AS the_answer FROM `life.the_universe.and_everything`;'}, 'dryRun': False}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/its-a-project-eh/queries/some-random-id', query_params={'maxResults': 0}, timeout=120)]

/usr/local/lib/python3.8/unittest/mock.py:950: AssertionError
----------------------------- Captured stdout call -----------------------------
Executing query with job ID: some-random-id
Query executing: 0.00s
Job ID some-random-id successfully executed
Query is running: 0%| |
_____________________ test_context_with_custom_connection ______________________

monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x7fe8ce79aac0>

@pytest.mark.usefixtures("ipython_interactive")
@pytest.mark.skipif(pandas is None, reason="Requires `pandas`")
def test_context_with_custom_connection(monkeypatch):
ip = IPython.get_ipython()
monkeypatch.setattr(bigquery, "bigquery_magics", None)
bigquery.load_ipython_extension(ip)
magics.context._project = None
magics.context._credentials = None
context_conn = magics.context._connection = make_connection(
QUERY_RESOURCE, QUERY_RESULTS_RESOURCE
)

default_credentials = mock.create_autospec(
google.auth.credentials.Credentials, instance=True
)
credentials_patch = mock.patch(
"google.auth.default", return_value=(default_credentials, "project-from-env")
)
default_conn = make_connection()
conn_patch = mock.patch("google.cloud.bigquery.client.Connection", autospec=True)
list_rows_patch = mock.patch(
"google.cloud.bigquery.client.Client._list_rows_from_query_results",
return_value=google.cloud.bigquery.table._EmptyRowIterator(),
)

with conn_patch as conn, credentials_patch, list_rows_patch as list_rows:
conn.return_value = default_conn
ip.run_cell_magic("bigquery", "", QUERY_STRING)

list_rows.assert_called()
default_conn.api_request.assert_not_called()
begin_call = mock.call(
method="POST",
path="/projects/project-from-env/jobs",
data=mock.ANY,
timeout=DEFAULT_TIMEOUT,
)
query_results_call = mock.call(
method="GET",
path=f"/projects/{PROJECT_ID}/queries/{JOB_ID}",
query_params=mock.ANY,
timeout=mock.ANY,
)
> context_conn.api_request.assert_has_calls([begin_call, query_results_call])

tests/unit/test_magics.py:263:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/project-from-env/jobs', data=<ANY>, timeout=None), call(method='GET', path='/projects/its-a-project-eh/queries/some-random-id', query_params=<ANY>, timeout=<ANY>)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/project-from-env/jobs', data=<ANY>, timeout=None),
E call(method='GET', path='/projects/its-a-project-eh/queries/some-random-id', query_params=<ANY>, timeout=<ANY>)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/project-from-env/jobs', data={'jobReference': {'jobId': '54801d23-0ba8-4837-887f-68a8c32d6e8f', 'projectId': 'project-from-env'}, 'configuration': {'query': {'queryParameters': [], 'useLegacySql': False, 'query': 'SELECT 42 AS the_answer FROM `life.the_universe.and_everything`;'}, 'dryRun': False}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/its-a-project-eh/queries/some-random-id', query_params={'maxResults': 0}, timeout=120)]

/usr/local/lib/python3.8/unittest/mock.py:950: AssertionError
----------------------------- Captured stdout call -----------------------------
Executing query with job ID: some-random-id
Query executing: 0.00s
Job ID some-random-id successfully executed
Query is running: 0%| |
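
Both magics failures stem from the same mocked connection, so the jobs.insert (begin) and jobs.getQueryResults calls show up wrapped in the probe noise seen throughout this run. The tests drive the cell magic end to end, roughly as follows (a sketch assuming an interactive IPython session; note the FutureWarning below deprecating this extension in favor of bigquery-magics):

    import IPython
    from google.cloud import bigquery

    ip = IPython.get_ipython()
    bigquery.load_ipython_extension(ip)  # registers the %%bigquery cell magic
    ip.run_cell_magic(
        "bigquery", "", "SELECT 42 AS the_answer FROM `life.the_universe.and_everything`;"
    )
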
=============================== warnings summary ===============================
.nox/unit-3-8/lib/python3.8/site-packages/pandas/compat/numpy/__init__.py:10
/tmpfs/src/github/python-bigquery/.nox/unit-3-8/lib/python3.8/site-packages/pandas/compat/numpy/__init__.py:10: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
_nlv = LooseVersion(_np_version)

.nox/unit-3-8/lib/python3.8/site-packages/pandas/compat/numpy/__init__.py:11
/tmpfs/src/github/python-bigquery/.nox/unit-3-8/lib/python3.8/site-packages/pandas/compat/numpy/__init__.py:11: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
np_version_under1p17 = _nlv < LooseVersion("1.17")

.nox/unit-3-8/lib/python3.8/site-packages/pandas/compat/numpy/__init__.py:12
/tmpfs/src/github/python-bigquery/.nox/unit-3-8/lib/python3.8/site-packages/pandas/compat/numpy/__init__.py:12: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
np_version_under1p18 = _nlv < LooseVersion("1.18")

.nox/unit-3-8/lib/python3.8/site-packages/pandas/compat/numpy/__init__.py:13
/tmpfs/src/github/python-bigquery/.nox/unit-3-8/lib/python3.8/site-packages/pandas/compat/numpy/__init__.py:13: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
_np_version_under1p19 = _nlv < LooseVersion("1.19")

.nox/unit-3-8/lib/python3.8/site-packages/pandas/compat/numpy/__init__.py:14
/tmpfs/src/github/python-bigquery/.nox/unit-3-8/lib/python3.8/site-packages/pandas/compat/numpy/__init__.py:14: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
_np_version_under1p20 = _nlv < LooseVersion("1.20")

.nox/unit-3-8/lib/python3.8/site-packages/setuptools/_distutils/version.py:345
/tmpfs/src/github/python-bigquery/.nox/unit-3-8/lib/python3.8/site-packages/setuptools/_distutils/version.py:345: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
other = LooseVersion(other)

.nox/unit-3-8/lib/python3.8/site-packages/pandas/compat/numpy/function.py:120
.nox/unit-3-8/lib/python3.8/site-packages/pandas/compat/numpy/function.py:120
/tmpfs/src/github/python-bigquery/.nox/unit-3-8/lib/python3.8/site-packages/pandas/compat/numpy/function.py:120: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
if LooseVersion(__version__) >= LooseVersion("1.17.0"):

google/cloud/bigquery/__init__.py:128
/tmpfs/src/github/python-bigquery/google/cloud/bigquery/__init__.py:128: PendingDeprecationWarning: The python-bigquery library will stop supporting Python 3.7 and Python 3.8 in a future major release expected in Q4 2024. Your Python version is 3.8.18. We recommend that you update soon to ensure ongoing support. For more details, see: [Google Cloud Client Libraries Supported Python Versions policy](https://cloud.google.com/python/docs/supported-python-versions)
warnings.warn(

tests/unit/test_opentelemetry_tracing.py:47
/tmpfs/src/github/python-bigquery/tests/unit/test_opentelemetry_tracing.py:47: PytestRemovedIn9Warning: Marks applied to fixtures have no effect
See docs: https://docs.pytest.org/en/stable/deprecations.html#applying-a-mark-to-a-fixture-function
def setup():

tests/unit/job/test_base.py: 5 warnings
tests/unit/job/test_copy.py: 6 warnings
tests/unit/job/test_extract.py: 6 warnings
tests/unit/job/test_load.py: 14 warnings
tests/unit/job/test_query.py: 27 warnings
tests/unit/job/test_query_pandas.py: 17 warnings
tests/unit/test__job_helpers.py: 2 warnings
tests/unit/test_client.py: 128 warnings
tests/unit/test_create_dataset.py: 16 warnings
tests/unit/test_delete_dataset.py: 10 warnings
tests/unit/test_job_retry.py: 11 warnings
tests/unit/test_list_datasets.py: 4 warnings
tests/unit/test_list_jobs.py: 8 warnings
tests/unit/test_list_models.py: 9 warnings
tests/unit/test_list_projects.py: 4 warnings
tests/unit/test_list_routines.py: 9 warnings
tests/unit/test_list_tables.py: 13 warnings
tests/unit/test_magics.py: 8 warnings
/tmpfs/src/github/python-bigquery/.nox/unit-3-8/lib/python3.8/site-packages/google/api_core/retry/retry_unary.py:281: UserWarning: Using the synchronous google.api_core.retry.Retry with asynchronous calls may lead to unexpected results. Please use google.api_core.retry_async.AsyncRetry instead.
warnings.warn(_ASYNC_RETRY_WARNING)

tests/unit/job/test_query_pandas.py: 18 warnings
tests/unit/test_table.py: 26 warnings
tests/unit/test_table_pandas.py: 4 warnings
/tmpfs/src/github/python-bigquery/google/cloud/bigquery/table.py:2309: UserWarning: Unable to represent RANGE schema as struct using pandas ArrowDtype. Using `object` instead. To use ArrowDtype, use pandas >= 1.5 and pyarrow >= 10.0.1.
warnings.warn(_RANGE_PYARROW_WARNING)

tests/unit/job/test_query_pandas.py: 18 warnings
tests/unit/test_table.py: 26 warnings
tests/unit/test_table_pandas.py: 4 warnings
/tmpfs/src/github/python-bigquery/google/cloud/bigquery/table.py:2323: UserWarning: Unable to represent RANGE schema as struct using pandas ArrowDtype. Using `object` instead. To use ArrowDtype, use pandas >= 1.5 and pyarrow >= 10.0.1.
warnings.warn(_RANGE_PYARROW_WARNING)

tests/unit/job/test_query_pandas.py: 18 warnings
tests/unit/test_table.py: 26 warnings
tests/unit/test_table_pandas.py: 4 warnings
/tmpfs/src/github/python-bigquery/google/cloud/bigquery/table.py:2337: UserWarning: Unable to represent RANGE schema as struct using pandas ArrowDtype. Using `object` instead. To use ArrowDtype, use pandas >= 1.5 and pyarrow >= 10.0.1.
warnings.warn(_RANGE_PYARROW_WARNING)

tests/unit/job/test_query_pandas.py::test_to_dataframe_bqstorage_preserve_order[select name, age from table order by other_column;]
tests/unit/job/test_query_pandas.py::test_to_dataframe_bqstorage_preserve_order[select name, age from table order by other_column;]
/tmpfs/src/github/python-bigquery/.nox/unit-3-8/lib/python3.8/site-packages/pandas/core/arrays/_arrow_utils.py:9: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
_pyarrow_version_ge_015 = LooseVersion(pyarrow.__version__) >= LooseVersion("0.15")

tests/unit/test_magics.py: 59 warnings
/tmpfs/src/github/python-bigquery/google/cloud/bigquery/__init__.py:237: FutureWarning: %load_ext google.cloud.bigquery is deprecated. Install bigquery-magics package and use `%load_ext bigquery_magics`, instead.
warnings.warn(
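
Aside: the warning spells out its own migration path; a sketch of an IPython session using the replacement extension it names:

# in a shell: pip install bigquery-magics
# then, in IPython / Jupyter:
%load_ext bigquery_magics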

tests/unit/test_table.py::TestTable::test_to_api_repr_w_unsetting_expiration
/tmpfs/src/github/python-bigquery/tests/unit/test_table.py:1170: PendingDeprecationWarning: This method will be deprecated in future versions. Please use Table.time_partitioning.expiration_ms instead.
table.partition_expiration = None
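
Aside: the replacement the warning points to is the expiration_ms property on Table.time_partitioning. A minimal sketch (the table ID is hypothetical):

from google.cloud.bigquery import Table, TimePartitioning

table = Table("proj.dataset.tbl")
table.time_partitioning = TimePartitioning(type_="DAY")
table.time_partitioning.expiration_ms = None  # unset the partition expiration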

tests/unit/test_table.py::TestRowIterator::test_to_arrow_ensure_bqstorage_client_wo_bqstorage
/tmpfs/src/github/python-bigquery/google/cloud/bigquery/table.py:1733: UserWarning: no bqstorage
warnings.warn(str(exc))

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
FAILED tests/unit/job/test_base.py::Test_AsyncJob::test_result_default_wo_state
FAILED tests/unit/job/test_base.py::Test_AsyncJob::test_result_w_retry_wo_state
FAILED tests/unit/job/test_query.py::TestQueryJob::test_result_begin_job_if_not_exist
FAILED tests/unit/job/test_query.py::TestQueryJob::test_result_reloads_job_state_until_done
FAILED tests/unit/job/test_query.py::TestQueryJob::test_result_w_custom_retry
FAILED tests/unit/job/test_query.py::TestQueryJob::test_result_w_page_size - ...
FAILED tests/unit/job/test_query.py::TestQueryJob::test_result_w_timeout_doesnt_raise
FAILED tests/unit/job/test_query.py::TestQueryJob::test_result_w_timeout_raises_concurrent_futures_timeout
FAILED tests/unit/job/test_query.py::TestQueryJob::test_result_with_done_job_calls_get_query_results
FAILED tests/unit/job/test_query.py::TestQueryJob::test_result_with_max_results
FAILED tests/unit/test_client.py::TestClient::test_create_routine_w_conflict_exists_ok
FAILED tests/unit/test_client.py::TestClient::test_create_table_alreadyexists_w_exists_ok_true
FAILED tests/unit/test_client.py::TestClient::test_list_rows_w_start_index_w_page_size
FAILED tests/unit/test_client.py::TestClient::test_query_and_wait_w_page_size_multiple_requests
FAILED tests/unit/test_create_dataset.py::test_create_dataset_alreadyexists_w_exists_ok_true
FAILED tests/unit/test_job_retry.py::test_retry_connection_error_with_default_retries_and_successful_first_job
FAILED tests/unit/test_job_retry.py::test_query_retry_with_default_retry_and_ambiguous_errors_only_retries_with_failed_job
FAILED tests/unit/test_magics.py::test_context_with_default_connection - Asse...
FAILED tests/unit/test_magics.py::test_context_with_custom_connection - Asser...
19 failed, 2387 passed, 3 skipped, 514 warnings in 66.22s (0:01:06)
nox > Command py.test --quiet '-W default::PendingDeprecationWarning' --cov=google/cloud/bigquery --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit failed with exit code 1
nox > Session unit-3.8 failed.
nox > Running session unit-3.12
nox > Creating virtual environment (virtualenv) using python3.12 in .nox/unit-3-12
nox > python -m pip install pytest google-cloud-testutils pytest-cov freezegun -c /tmpfs/src/github/python-bigquery/testing/constraints-3.12.txt
nox > python -m pip install -e '.[bqstorage,ipywidgets,pandas,tqdm,opentelemetry]' -c /tmpfs/src/github/python-bigquery/testing/constraints-3.12.txt
nox > python -m pip freeze
asttokens==2.4.1
cachetools==5.5.0
certifi==2024.8.30
charset-normalizer==3.4.0
click==8.1.7
comm==0.2.2
coverage==7.6.4
db-dtypes==1.3.0
debugpy==1.8.7
decorator==5.1.1
Deprecated==1.2.14
executing==2.1.0
freezegun==1.5.1
google-api-core @ git+https://github.com/googleapis/python-api-core.git@0d5ed37c96f9b40bccae98e228163a88abeb1763
google-auth==2.35.0
# Editable Git install with no remote (google-cloud-bigquery==3.26.0)
-e /tmpfs/src/github/python-bigquery
google-cloud-bigquery-storage==2.27.0
google-cloud-core==2.4.1
google-cloud-testutils==1.4.0
google-crc32c==1.6.0
google-resumable-media==2.7.2
googleapis-common-protos==1.65.0
grpcio==1.67.0
grpcio-status==1.67.0
idna==3.10
importlib_metadata==8.4.0
iniconfig==2.0.0
ipykernel==6.29.5
ipython==8.28.0
ipywidgets==8.1.5
jedi==0.19.1
jupyter_client==8.6.3
jupyter_core==5.7.2
jupyterlab_widgets==3.0.13
matplotlib-inline==0.1.7
nest-asyncio==1.6.0
numpy==2.1.2
opentelemetry-api==1.27.0
opentelemetry-instrumentation==0.48b0
opentelemetry-sdk==1.27.0
opentelemetry-semantic-conventions==0.48b0
packaging==24.1
pandas==2.2.3
parso==0.8.4
pexpect==4.9.0
platformdirs==4.3.6
pluggy==1.5.0
prompt_toolkit==3.0.48
proto-plus==1.24.0
protobuf==5.28.2
psutil==6.1.0
ptyprocess==0.7.0
pure_eval==0.2.3
pyarrow==17.0.0
pyasn1==0.6.1
pyasn1_modules==0.4.1
Pygments==2.18.0
pytest==8.3.3
pytest-cov==5.0.0
python-dateutil==2.9.0.post0
pytz==2024.2
pyzmq==26.2.0
requests==2.32.3
rsa==4.9
six==1.16.0
stack-data==0.6.3
tornado==6.4.1
tqdm==4.66.5
traitlets==5.14.3
typing_extensions==4.12.2
tzdata==2024.2
urllib3==2.2.3
wcwidth==0.2.13
widgetsnbextension==4.0.13
wrapt==1.16.0
zipp==3.20.2
nox > py.test --quiet '-W default::PendingDeprecationWarning' --cov=google/cloud/bigquery --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit
........................................................................ [ 2%]
..................................F.F................................... [ 5%]
........................................................................ [ 8%]
........................................................................ [ 11%]
........................................................F..F.F.FFFF.F... [ 14%]
............................................................s.....sss... [ 17%]
........................................................................ [ 20%]
........................................................................ [ 23%]
........................................................................ [ 26%]
........................................................................ [ 29%]
........................................................................ [ 32%]
........................................................................ [ 35%]
........................ss.......sss................................s... [ 38%]
................................s.....s..........s...................... [ 41%]
.....................F..F............................................... [ 44%]
.......................................................F..............F. [ 47%]
........................................................................ [ 50%]
................................................F....................... [ 53%]
........................................................................ [ 56%]
........................................................................ [ 59%]
........................................................................ [ 62%]
........................................................................ [ 65%]
...............................................................FF....... [ 68%]
...................................................F.F.................. [ 71%]
........................................................................ [ 74%]
........................................................................ [ 77%]
........................................................................ [ 80%]
........................................................................ [ 83%]
........................................................................ [ 86%]
........................................................................ [ 89%]
........................................................................ [ 92%]
................s...............................s...............s.....s. [ 95%]
...........................s..sss.ss.................................... [ 98%]
.............................s... [100%]
=================================== FAILURES ===================================
__________________ Test_AsyncJob.test_result_default_wo_state __________________

self = <tests.unit.job.test_base.Test_AsyncJob testMethod=test_result_default_wo_state>

def test_result_default_wo_state(self):
from google.cloud.bigquery.retry import DEFAULT_GET_JOB_TIMEOUT

begun_job_resource = _make_job_resource(
job_id=self.JOB_ID, project_id=self.PROJECT, location="US", started=True
)
done_job_resource = _make_job_resource(
job_id=self.JOB_ID,
project_id=self.PROJECT,
location="US",
started=True,
ended=True,
)
conn = make_connection(
_make_retriable_exception(),
begun_job_resource,
done_job_resource,
)
client = _make_client(project=self.PROJECT, connection=conn)
job = self._make_one(self.JOB_ID, client)

self.assertIs(job.result(retry=polling.DEFAULT_RETRY), job)

begin_call = mock.call(
method="POST",
path=f"/projects/{self.PROJECT}/jobs",
data={"jobReference": {"jobId": self.JOB_ID, "projectId": self.PROJECT}},
timeout=None,
)
reload_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}",
query_params={
"projection": "full",
"location": "US",
},
timeout=DEFAULT_GET_JOB_TIMEOUT,
)
> conn.api_request.assert_has_calls([begin_call, begin_call, reload_call])

tests/unit/job/test_base.py:1060:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 't...T', path='/projects/test-project-123/jobs/job-id', query_params={'projection': 'full', 'location': 'US'}, timeout=128)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 'test-project-123'}}, timeout=None),
E call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 'test-project-123'}}, timeout=None),
E call(method='GET', path='/projects/test-project-123/jobs/job-id', query_params={'projection': 'full', 'location': 'US'}, timeout=128)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 'test-project-123'}}, timeout=None),
E call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 'test-project-123'}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/test-project-123/jobs/job-id', query_params={'projection': 'full', 'location': 'US'}, timeout=128)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
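
Aside on this failure mode, which repeats across all 19 failures below: the "Actual" list differs from "Expected" only by the interleaved `call.__code__.co_flags.__and__(128)` / `...__bool__()` entries. 128 is CO_COROUTINE, so something evaluated `bool(target.__code__.co_flags & 128)` (an iscoroutinefunction-style probe) directly against the mocked api_request; given the git-pinned google-api-core in the pip freeze above, its retry path is a plausible source, though that attribution is an assumption. On a function-spec'd MagicMock the probe itself gets recorded. Minimal reproduction sketch:

from unittest import mock

def api_request(*args, **kwargs):
    """Stand-in with a real function spec, so __code__ resolves to a child mock."""

m = mock.MagicMock(spec=api_request)

# An unguarded coroutine probe: CO_COROUTINE is 0x0080 == 128.
bool(m.__code__.co_flags & 128)

m(method="POST")
print(m.mock_calls)
# [call.__code__.co_flags.__and__(128),
#  call.__code__.co_flags.__and__().__bool__(),
#  call(method='POST')]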
__________________ Test_AsyncJob.test_result_w_retry_wo_state __________________

self = <tests.unit.job.test_base.Test_AsyncJob testMethod=test_result_w_retry_wo_state>

def test_result_w_retry_wo_state(self):
from google.cloud.bigquery.retry import DEFAULT_GET_JOB_TIMEOUT

begun_job_resource = _make_job_resource(
job_id=self.JOB_ID, project_id=self.PROJECT, location="EU", started=True
)
done_job_resource = _make_job_resource(
job_id=self.JOB_ID,
project_id=self.PROJECT,
location="EU",
started=True,
ended=True,
)
conn = make_connection(
exceptions.NotFound("not normally retriable"),
begun_job_resource,
exceptions.NotFound("not normally retriable"),
done_job_resource,
)
client = _make_client(project=self.PROJECT, connection=conn)
job = self._make_one(
self._job_reference(self.JOB_ID, self.PROJECT, "EU"), client
)
custom_predicate = mock.Mock()
custom_predicate.return_value = True
custom_retry = google.api_core.retry.Retry(
predicate=custom_predicate,
initial=0.001,
maximum=0.001,
deadline=0.1,
)
self.assertIs(job.result(retry=custom_retry), job)

begin_call = mock.call(
method="POST",
path=f"/projects/{self.PROJECT}/jobs",
data={
"jobReference": {
"jobId": self.JOB_ID,
"projectId": self.PROJECT,
"location": "EU",
}
},
timeout=None,
)
reload_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}",
query_params={
"projection": "full",
"location": "EU",
},
timeout=DEFAULT_GET_JOB_TIMEOUT,
)
> conn.api_request.assert_has_calls(
[begin_call, begin_call, reload_call, reload_call]
)

tests/unit/job/test_base.py:1116:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 't...T', path='/projects/test-project-123/jobs/job-id', query_params={'projection': 'full', 'location': 'EU'}, timeout=128)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 'test-project-123', 'location': 'EU'}}, timeout=None),
E call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 'test-project-123', 'location': 'EU'}}, timeout=None),
E call(method='GET', path='/projects/test-project-123/jobs/job-id', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call(method='GET', path='/projects/test-project-123/jobs/job-id', query_params={'projection': 'full', 'location': 'EU'}, timeout=128)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 'test-project-123', 'location': 'EU'}}, timeout=None),
E call(method='POST', path='/projects/test-project-123/jobs', data={'jobReference': {'jobId': 'job-id', 'projectId': 'test-project-123', 'location': 'EU'}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/test-project-123/jobs/job-id', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call(method='GET', path='/projects/test-project-123/jobs/job-id', query_params={'projection': 'full', 'location': 'EU'}, timeout=128)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
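
Aside: the docstring quoted above is the key to these failures: with any_order=False, assert_has_calls requires the expected calls to appear as an uninterrupted run inside mock_calls. Every expected call is present in the "Actual" lists, but the interleaved flag probes break the run. A small illustration:

from unittest import mock

m = mock.MagicMock()
m(1)
m.probe()  # a stray recorded call, like the __code__.co_flags probes above
m(2)

m.assert_has_calls([mock.call(1), mock.call(2)], any_order=True)  # passes: all present
m.assert_has_calls([mock.call(1), mock.call(2)])  # raises: the run is interrupted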
_______________ TestQueryJob.test_result_begin_job_if_not_exist ________________

self = <tests.unit.job.test_query.TestQueryJob testMethod=test_result_begin_job_if_not_exist>

def test_result_begin_job_if_not_exist(self):
begun_resource = self._make_resource()
query_running_resource = {
"jobComplete": True,
"jobReference": {
"projectId": self.PROJECT,
"jobId": self.JOB_ID,
"location": "US",
},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
"status": {"state": "RUNNING"},
}
query_done_resource = {
"jobComplete": True,
"jobReference": {
"projectId": self.PROJECT,
"jobId": self.JOB_ID,
"location": "US",
},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
"status": {"state": "DONE"},
}
done_resource = copy.deepcopy(begun_resource)
done_resource["status"] = {"state": "DONE"}
connection = make_connection(
begun_resource,
query_running_resource,
query_done_resource,
done_resource,
)
client = _make_client(project=self.PROJECT, connection=connection)
job = self._make_one(self.JOB_ID, self.QUERY, client)
job._properties["jobReference"]["location"] = "US"

job.result()

create_job_call = mock.call(
method="POST",
path=f"/projects/{self.PROJECT}/jobs",
data={
"jobReference": {
"jobId": self.JOB_ID,
"projectId": self.PROJECT,
"location": "US",
},
"configuration": {
"query": {"useLegacySql": False, "query": self.QUERY},
},
},
timeout=None,
)
reload_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}",
query_params={"projection": "full", "location": "US"},
timeout=DEFAULT_GET_JOB_TIMEOUT,
)
get_query_results_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/queries/{self.JOB_ID}",
query_params={
"maxResults": 0,
"location": "US",
},
timeout=None,
)

> connection.api_request.assert_has_calls(
[
# Make sure we start a job that hasn't started yet. See:
# https://github.com/googleapis/python-bigquery/issues/1940
create_job_call,
reload_call,
get_query_results_call,
reload_call,
]
)

tests/unit/job/test_query.py:1109:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/project/jobs', data={'jobReference': {'jobId': 'JOB_ID', 'projectId': 'project', ...ethod='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/project/jobs', data={'jobReference': {'jobId': 'JOB_ID', 'projectId': 'project', 'location': 'US'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select count(*) from persons'}}}, timeout=None),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'US'}, timeout=None),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/project/jobs', data={'jobReference': {'jobId': 'JOB_ID', 'projectId': 'project', 'location': 'US'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select count(*) from persons'}}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'US'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
____________ TestQueryJob.test_result_reloads_job_state_until_done _____________

self = <tests.unit.job.test_query.TestQueryJob testMethod=test_result_reloads_job_state_until_done>

def test_result_reloads_job_state_until_done(self):
"""Verify that result() doesn't return until state == 'DONE'.

This test verifies correctness for a possible sequence of API responses
that might cause internal customer issue b/332850329.
"""
from google.cloud.bigquery.table import RowIterator

query_resource = {
"jobComplete": False,
"jobReference": {
"projectId": self.PROJECT,
"jobId": self.JOB_ID,
"location": "EU",
},
}
query_resource_done = {
"jobComplete": True,
"jobReference": {
"projectId": self.PROJECT,
"jobId": self.JOB_ID,
"location": "EU",
},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
"totalRows": "2",
"queryId": "abc-def",
}
job_resource = self._make_resource(started=True, location="EU")
job_resource_done = self._make_resource(started=True, ended=True, location="EU")
job_resource_done["configuration"]["query"]["destinationTable"] = {
"projectId": "dest-project",
"datasetId": "dest_dataset",
"tableId": "dest_table",
}
query_page_resource = {
# Explicitly set totalRows to be different from the initial
# response to test update during iteration.
"totalRows": "1",
"pageToken": None,
"rows": [{"f": [{"v": "abc"}]}],
}
conn = make_connection(
# QueryJob.result() makes a pair of jobs.get & jobs.getQueryResults
# REST API calls each iteration to determine if the job has finished
# or not.
#
# jobs.get (https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/get)
# is necessary to make sure the job has really finished via
# `Job.status.state == "DONE"` and to get necessary properties for
# `RowIterator` like the destination table.
#
# jobs.getQueryResults
# (https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/getQueryResults)
# with maxResults == 0 is technically optional,
# but it hangs up to 10 seconds until the job has finished. This
# makes sure we can know when the query has finished as close as
# possible to when the query finishes. It also gets properties
# necessary for `RowIterator` that aren't available on the job
# resource such as the schema
# (https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/getQueryResults#body.GetQueryResultsResponse.FIELDS.schema)
# of the results.
job_resource,
query_resource,
# The query wasn't finished in the last call to jobs.get, so try
# again with a call to both jobs.get & jobs.getQueryResults.
job_resource,
query_resource_done,
# Even though the previous jobs.getQueryResults response says
# the job is complete, we haven't downloaded the full job status
# yet.
#
# Important: per internal issue 332850329, this response has
# `Job.status.state = "RUNNING"`. This ensures we are protected
# against possible eventual consistency issues where
# `jobs.getQueryResults` says jobComplete == True, but our next
# call to `jobs.get` still doesn't have
# `Job.status.state == "DONE"`.
job_resource,
# Try again until `Job.status.state == "DONE"`.
#
# Note: the call to `jobs.getQueryResults` is missing here as
# an optimization. We already received a "completed" response, so
# we won't learn anything new by calling that API again.
job_resource,
job_resource_done,
# When we iterate over the `RowIterator` we return from
# `QueryJob.result()`, we make additional calls to
# `jobs.getQueryResults` but this time allowing the actual rows
# to be returned as well.
query_page_resource,
)
client = _make_client(self.PROJECT, connection=conn)
job = self._get_target_class().from_api_repr(job_resource, client)

result = job.result()

self.assertIsInstance(result, RowIterator)
self.assertEqual(result.total_rows, 2)
rows = list(result)
self.assertEqual(len(rows), 1)
self.assertEqual(rows[0].col1, "abc")
self.assertEqual(result.job_id, self.JOB_ID)
self.assertEqual(result.location, "EU")
self.assertEqual(result.project, self.PROJECT)
self.assertEqual(result.query_id, "abc-def")
# Test that the total_rows property has changed during iteration, based
# on the response from tabledata.list.
self.assertEqual(result.total_rows, 1)

query_results_path = f"/projects/{self.PROJECT}/queries/{self.JOB_ID}"
query_results_call = mock.call(
method="GET",
path=query_results_path,
query_params={"maxResults": 0, "location": "EU"},
timeout=None,
)
reload_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}",
query_params={"projection": "full", "location": "EU"},
timeout=DEFAULT_GET_JOB_TIMEOUT,
)
query_page_call = mock.call(
method="GET",
path=query_results_path,
query_params={
"fields": _LIST_ROWS_FROM_QUERY_RESULTS_FIELDS,
"location": "EU",
"formatOptions.useInt64Timestamp": True,
},
timeout=None,
)
# Ensure that we actually made the expected API calls in the sequence
# we described above at the make_connection() call.
#
# Note: The responses from jobs.get and jobs.getQueryResults can be
# deceptively similar, so this check ensures we actually made the
# requests we expected.
> conn.api_request.assert_has_calls(
[
# jobs.get & jobs.getQueryResults because the job just started.
reload_call,
query_results_call,
# jobs.get & jobs.getQueryResults because the query is still
# running.
reload_call,
query_results_call,
# We got a jobComplete response from the most recent call to
# jobs.getQueryResults, so now call jobs.get until we get
# `Jobs.status.state == "DONE"`. This tests a fix for internal
# issue b/332850329.
reload_call,
reload_call,
reload_call,
# jobs.getQueryResults without `maxResults` set to download
# the rows as we iterate over the `RowIterator`.
query_page_call,
]
)

tests/unit/job/test_query.py:999:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeo...='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128), ...]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'EU'}, timeout=None),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'EU'}, timeout=None),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'EU', 'formatOptions.useInt64Timestamp': True}, timeout=None)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'EU'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'EU'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'EU'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(timeout=None, method='GET', path='/projects/project/queries/JOB_ID', query_params={'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'EU', 'formatOptions.useInt64Timestamp': True})]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
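
Aside: the comment block inside make_connection() above describes the polling protocol under test: jobs.get is authoritative for Job.status.state, jobs.getQueryResults with maxResults=0 is a long-polling hint, and once a jobComplete hint has been seen only jobs.get is repeated until the state is DONE. A self-contained sketch of that ordering (the stub callables are stand-ins for the REST calls, not the client's implementation):

def poll(jobs_get, get_query_results):
    trace = []
    hinted_complete = False
    while True:
        state = jobs_get()  # jobs.get: authoritative Job.status.state
        trace.append("jobs.get")
        if state == "DONE":
            return trace
        if not hinted_complete:
            hinted_complete = get_query_results()  # maxResults=0 long poll
            trace.append("jobs.getQueryResults")

states = iter(["RUNNING"] * 4 + ["DONE"])
hints = iter([False, True])
print(poll(lambda: next(states), lambda: next(hints)))
# ['jobs.get', 'jobs.getQueryResults', 'jobs.get', 'jobs.getQueryResults',
#  'jobs.get', 'jobs.get', 'jobs.get'] -- the same order the test asserts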
___________________ TestQueryJob.test_result_w_custom_retry ____________________

self = <tests.unit.job.test_query.TestQueryJob testMethod=test_result_w_custom_retry>

def test_result_w_custom_retry(self):
from google.cloud.bigquery.table import RowIterator

query_resource = {
"jobComplete": False,
"jobReference": {"projectId": self.PROJECT, "jobId": self.JOB_ID},
}
query_resource_done = {
"jobComplete": True,
"jobReference": {"projectId": self.PROJECT, "jobId": self.JOB_ID},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
"totalRows": "2",
}
job_resource = self._make_resource(started=True, location="asia-northeast1")
job_resource_done = self._make_resource(
started=True, ended=True, location="asia-northeast1"
)
job_resource_done["configuration"]["query"]["destinationTable"] = {
"projectId": "dest-project",
"datasetId": "dest_dataset",
"tableId": "dest_table",
}

connection = make_connection(
# Also, for each API request, raise an exception that we know can
# be retried. Because of this, for each iteration we do:
# jobs.get (x2) & jobs.getQueryResults (x2)
exceptions.NotFound("not normally retriable"),
job_resource,
exceptions.NotFound("not normally retriable"),
query_resource,
# Query still not done, repeat both.
exceptions.NotFound("not normally retriable"),
job_resource,
exceptions.NotFound("not normally retriable"),
query_resource,
exceptions.NotFound("not normally retriable"),
# Query still not done, repeat both.
job_resource_done,
exceptions.NotFound("not normally retriable"),
query_resource_done,
# Query finished!
)
client = _make_client(self.PROJECT, connection=connection)
job = self._get_target_class().from_api_repr(job_resource, client)

custom_predicate = mock.Mock()
custom_predicate.return_value = True
custom_retry = google.api_core.retry.Retry(
initial=0.001,
maximum=0.001,
multiplier=1.0,
deadline=0.1,
predicate=custom_predicate,
)

self.assertIsInstance(job.result(retry=custom_retry), RowIterator)
query_results_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/queries/{self.JOB_ID}",
query_params={"maxResults": 0, "location": "asia-northeast1"},
# TODO(tswast): Why do we end up setting timeout to
# google.cloud.bigquery.client._MIN_GET_QUERY_RESULTS_TIMEOUT in
# some cases but not others?
timeout=mock.ANY,
)
reload_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}",
query_params={"projection": "full", "location": "asia-northeast1"},
timeout=DEFAULT_GET_JOB_TIMEOUT,
)

> connection.api_request.assert_has_calls(
[
# See make_connection() call above for explanation of the
# expected API calls.
#
# Query not done.
reload_call,
reload_call,
query_results_call,
query_results_call,
# Query still not done.
reload_call,
reload_call,
query_results_call,
query_results_call,
# Query done!
reload_call,
reload_call,
query_results_call,
query_results_call,
]
)

tests/unit/job/test_query.py:1400:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northe...'/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128), ...]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=<ANY>),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=<ANY>),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=<ANY>),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=<ANY>),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=<ANY>),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=<ANY>)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=None),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=None),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'asia-northeast1'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=None),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'asia-northeast1'}, timeout=None)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
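
Aside: in the "Expected" list above, timeout=<ANY> is the repr of mock.ANY, which the test passes for the jobs.getQueryResults timeout. ANY compares equal to any value, so those entries match both None and numeric timeouts:

from unittest import mock

assert mock.call(timeout=None) == mock.call(timeout=mock.ANY)
assert mock.call(timeout=128) == mock.call(timeout=mock.ANY)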
_____________________ TestQueryJob.test_result_w_page_size _____________________

self = <tests.unit.job.test_query.TestQueryJob testMethod=test_result_w_page_size>

def test_result_w_page_size(self):
# Arrange
query_results_resource = {
"jobComplete": True,
"jobReference": {"projectId": self.PROJECT, "jobId": self.JOB_ID},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
"totalRows": "10",
"rows": [
{"f": [{"v": "row1"}]},
{"f": [{"v": "row2"}]},
{"f": [{"v": "row3"}]},
{"f": [{"v": "row4"}]},
{"f": [{"v": "row5"}]},
{"f": [{"v": "row6"}]},
{"f": [{"v": "row7"}]},
{"f": [{"v": "row8"}]},
{"f": [{"v": "row9"}]},
],
"pageToken": "first-page-token",
}
job_resource_running = self._make_resource(
started=True, ended=False, location="US"
)
job_resource_done = self._make_resource(started=True, ended=True, location="US")
destination_table = {
"projectId": self.PROJECT,
"datasetId": self.DS_ID,
"tableId": self.TABLE_ID,
}
q_config = job_resource_done["configuration"]["query"]
q_config["destinationTable"] = destination_table
query_page_resource_2 = {"totalRows": 10, "rows": [{"f": [{"v": "row10"}]}]}
conn = make_connection(
job_resource_running,
query_results_resource,
job_resource_done,
query_page_resource_2,
)
client = _make_client(self.PROJECT, connection=conn)
job = self._get_target_class().from_api_repr(job_resource_running, client)

# Act
result = job.result(page_size=9)

# Assert
actual_rows = list(result)
self.assertEqual(len(actual_rows), 10)

jobs_get_path = f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}"
jobs_get_call = mock.call(
method="GET",
path=jobs_get_path,
query_params={"projection": "full", "location": "US"},
timeout=DEFAULT_GET_JOB_TIMEOUT,
)
query_results_path = f"/projects/{self.PROJECT}/queries/{self.JOB_ID}"
query_page_waiting_call = mock.call(
method="GET",
path=query_results_path,
query_params={
# Waiting for the results should set maxResults and cache the
# first page if page_size is set. This allows customers to
# more finely tune when we fall back to the BQ Storage API.
# See internal issue: 344008814.
"maxResults": 9,
"location": "US",
"formatOptions.useInt64Timestamp": True,
},
timeout=None,
)
query_page_2_call = mock.call(
timeout=None,
method="GET",
path=query_results_path,
query_params={
"pageToken": "first-page-token",
"maxResults": 9,
"fields": _LIST_ROWS_FROM_QUERY_RESULTS_FIELDS,
"location": "US",
"formatOptions.useInt64Timestamp": True,
},
)
> conn.api_request.assert_has_calls(
[jobs_get_call, query_page_waiting_call, jobs_get_call, query_page_2_call]
)

tests/unit/job/test_query.py:1625:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeo...ts': 9, 'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'US', 'formatOptions.useInt64Timestamp': True})]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 9, 'location': 'US', 'formatOptions.useInt64Timestamp': True}, timeout=None),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128),
E call(timeout=None, method='GET', path='/projects/project/queries/JOB_ID', query_params={'pageToken': 'first-page-token', 'maxResults': 9, 'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'US', 'formatOptions.useInt64Timestamp': True})]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 9, 'formatOptions.useInt64Timestamp': True, 'location': 'US'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(timeout=None, method='GET', path='/projects/project/queries/JOB_ID', query_params={'pageToken': 'first-page-token', 'maxResults': 9, 'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'US', 'formatOptions.useInt64Timestamp': True})]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
_______________ TestQueryJob.test_result_w_timeout_doesnt_raise ________________

self = <tests.unit.job.test_query.TestQueryJob testMethod=test_result_w_timeout_doesnt_raise>

def test_result_w_timeout_doesnt_raise(self):
import google.cloud.bigquery.client

begun_resource = self._make_resource()
query_resource = {
"jobComplete": True,
"jobReference": {"projectId": self.PROJECT, "jobId": self.JOB_ID},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
}
done_resource = copy.deepcopy(begun_resource)
done_resource["status"] = {"state": "DONE"}
connection = make_connection(begun_resource, query_resource, done_resource)
client = _make_client(project=self.PROJECT, connection=connection)
job = self._make_one(self.JOB_ID, self.QUERY, client)
job._properties["jobReference"]["location"] = "US"
job._properties["status"] = {"state": "RUNNING"}

with freezegun.freeze_time("1970-01-01 00:00:00", tick=False):
job.result(
# Test that fractional seconds are supported, but use a timeout
# that is representable as a floating point without rounding
# errors since it can be represented exactly in base 2. In this
# case 1.125 is 9 / 8, which is a fraction with a power of 2 in
# the denominator.
timeout=1.125,
)

reload_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}",
query_params={"projection": "full", "location": "US"},
timeout=1.125,
)
get_query_results_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/queries/{self.JOB_ID}",
query_params={
"maxResults": 0,
"location": "US",
},
timeout=google.cloud.bigquery.client._MIN_GET_QUERY_RESULTS_TIMEOUT,
)
> connection.api_request.assert_has_calls(
[
reload_call,
get_query_results_call,
reload_call,
]
)

tests/unit/job/test_query.py:1489:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeo...hod='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=1.125)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=1.125),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'US'}, timeout=120),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=1.125)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=1.125),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'US'}, timeout=120),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=1.125)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
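
Aside on the comment about timeout=1.125: 9/8 has a power-of-two denominator, so the value is exact in binary floating point, which keeps the frozen-clock timeout arithmetic deterministic:

assert 1.125 == 9 / 8
assert (1.125).hex() == "0x1.2000000000000p+0"  # exact, no rounding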
_____ TestQueryJob.test_result_w_timeout_raises_concurrent_futures_timeout _____

self = <tests.unit.job.test_query.TestQueryJob testMethod=test_result_w_timeout_raises_concurrent_futures_timeout>

def test_result_w_timeout_raises_concurrent_futures_timeout(self):
import google.cloud.bigquery.client

begun_resource = self._make_resource()
begun_resource["jobReference"]["location"] = "US"
query_resource = {
"jobComplete": True,
"jobReference": {"projectId": self.PROJECT, "jobId": self.JOB_ID},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
}
done_resource = copy.deepcopy(begun_resource)
done_resource["status"] = {"state": "DONE"}
connection = make_connection(begun_resource, query_resource, done_resource)
client = _make_client(project=self.PROJECT, connection=connection)
job = self._make_one(self.JOB_ID, self.QUERY, client)
job._properties["jobReference"]["location"] = "US"
job._properties["status"] = {"state": "RUNNING"}

with freezegun.freeze_time(
"1970-01-01 00:00:00", auto_tick_seconds=1.0
), self.assertRaises(concurrent.futures.TimeoutError):
job.result(timeout=1.125)

reload_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}",
query_params={"projection": "full", "location": "US"},
timeout=1.125,
)
get_query_results_call = mock.call(
method="GET",
path=f"/projects/{self.PROJECT}/queries/{self.JOB_ID}",
query_params={
"maxResults": 0,
"location": "US",
},
timeout=google.cloud.bigquery.client._MIN_GET_QUERY_RESULTS_TIMEOUT,
)
> connection.api_request.assert_has_calls(
[
reload_call,
get_query_results_call,
# Timeout before we can reload with the final job state.
]
)

tests/unit/job/test_query.py:1535:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeo...(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'US'}, timeout=120)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=1.125),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'US'}, timeout=120)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=1.125),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'US'}, timeout=120)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
________ TestQueryJob.test_result_with_done_job_calls_get_query_results ________

self = <tests.unit.job.test_query.TestQueryJob testMethod=test_result_with_done_job_calls_get_query_results>

def test_result_with_done_job_calls_get_query_results(self):
query_resource_done = {
"jobComplete": True,
"jobReference": {"projectId": self.PROJECT, "jobId": self.JOB_ID},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
"totalRows": "1",
}
job_resource = self._make_resource(started=True, ended=True, location="EU")
job_resource["configuration"]["query"]["destinationTable"] = {
"projectId": "dest-project",
"datasetId": "dest_dataset",
"tableId": "dest_table",
}
results_page_resource = {
"totalRows": "1",
"pageToken": None,
"rows": [{"f": [{"v": "abc"}]}],
}
conn = make_connection(query_resource_done, results_page_resource)
client = _make_client(self.PROJECT, connection=conn)
job = self._get_target_class().from_api_repr(job_resource, client)

result = job.result()

rows = list(result)
self.assertEqual(len(rows), 1)
self.assertEqual(rows[0].col1, "abc")

query_results_path = f"/projects/{self.PROJECT}/queries/{self.JOB_ID}"
query_results_call = mock.call(
method="GET",
path=query_results_path,
query_params={"maxResults": 0, "location": "EU"},
timeout=None,
)
query_results_page_call = mock.call(
method="GET",
path=query_results_path,
query_params={
"fields": _LIST_ROWS_FROM_QUERY_RESULTS_FIELDS,
"location": "EU",
"formatOptions.useInt64Timestamp": True,
},
timeout=None,
)
> conn.api_request.assert_has_calls([query_results_call, query_results_page_call])

tests/unit/job/test_query.py:1165:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'EU'}, timeout...s': 'jobReference,totalRows,pageToken,rows', 'location': 'EU', 'formatOptions.useInt64Timestamp': True}, timeout=None)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'EU'}, timeout=None),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'EU', 'formatOptions.useInt64Timestamp': True}, timeout=None)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 0, 'location': 'EU'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(timeout=None, method='GET', path='/projects/project/queries/JOB_ID', query_params={'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'EU', 'formatOptions.useInt64Timestamp': True})]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
__________________ TestQueryJob.test_result_with_max_results ___________________

self = <tests.unit.job.test_query.TestQueryJob testMethod=test_result_with_max_results>

def test_result_with_max_results(self):
from google.cloud.bigquery.table import RowIterator

query_resource = {
"jobComplete": True,
"jobReference": {"projectId": self.PROJECT, "jobId": self.JOB_ID},
"schema": {"fields": [{"name": "col1", "type": "STRING"}]},
"totalRows": "10",
"pageToken": "first-page-token",
"rows": [
{"f": [{"v": "abc"}]},
{"f": [{"v": "def"}]},
{"f": [{"v": "ghi"}]},
{"f": [{"v": "jkl"}]},
{"f": [{"v": "mno"}]},
{"f": [{"v": "pqr"}]},
# Pretend these are very large rows, so the API doesn't return
# all of the rows we asked for in the first response.
],
}
query_page_resource = {
"totalRows": "10",
"pageToken": None,
"rows": [
{"f": [{"v": "stu"}]},
{"f": [{"v": "vwx"}]},
{"f": [{"v": "yz0"}]},
],
}
job_resource_running = self._make_resource(
started=True, ended=False, location="US"
)
job_resource_done = self._make_resource(started=True, ended=True, location="US")
conn = make_connection(job_resource_done, query_resource, query_page_resource)
client = _make_client(self.PROJECT, connection=conn)
job = self._get_target_class().from_api_repr(job_resource_running, client)

max_results = 9
result = job.result(max_results=max_results)

self.assertIsInstance(result, RowIterator)
self.assertEqual(result.total_rows, 10)

rows = list(result)

self.assertEqual(len(rows), 9)
jobs_get_path = f"/projects/{self.PROJECT}/jobs/{self.JOB_ID}"
jobs_get_call = mock.call(
method="GET",
path=jobs_get_path,
query_params={"projection": "full", "location": "US"},
timeout=DEFAULT_GET_JOB_TIMEOUT,
)
query_results_path = f"/projects/{self.PROJECT}/queries/{self.JOB_ID}"
query_page_waiting_call = mock.call(
method="GET",
path=query_results_path,
query_params={
# Waiting for the results should set maxResults and cache the
# first page if max_results is set. This allows customers to
# more finely tune when we fall back to the BQ Storage API.
# See internal issue: 344008814.
"maxResults": max_results,
"formatOptions.useInt64Timestamp": True,
"location": "US",
},
timeout=None,
)
query_page_2_call = mock.call(
timeout=None,
method="GET",
path=query_results_path,
query_params={
"pageToken": "first-page-token",
"maxResults": 3,
"fields": _LIST_ROWS_FROM_QUERY_RESULTS_FIELDS,
"location": "US",
"formatOptions.useInt64Timestamp": True,
},
)
# Waiting for the results should set maxResults and cache the
# first page if max_results is set. This allows customers to
# more finely tune when we fall back to the BQ Storage API.
# See internal issue: 344008814.
> conn.api_request.assert_has_calls(
[jobs_get_call, query_page_waiting_call, query_page_2_call]
)

tests/unit/job/test_query.py:1323:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeo...ts': 3, 'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'US', 'formatOptions.useInt64Timestamp': True})]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 9, 'formatOptions.useInt64Timestamp': True, 'location': 'US'}, timeout=None),
E call(timeout=None, method='GET', path='/projects/project/queries/JOB_ID', query_params={'pageToken': 'first-page-token', 'maxResults': 3, 'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'US', 'formatOptions.useInt64Timestamp': True})]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/jobs/JOB_ID', query_params={'projection': 'full', 'location': 'US'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/project/queries/JOB_ID', query_params={'maxResults': 9, 'formatOptions.useInt64Timestamp': True, 'location': 'US'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(timeout=None, method='GET', path='/projects/project/queries/JOB_ID', query_params={'pageToken': 'first-page-token', 'maxResults': 3, 'fields': 'jobReference,totalRows,pageToken,rows', 'location': 'US', 'formatOptions.useInt64Timestamp': True})]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
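Two different 128s appear in the Expected list above: timeout=128 is just DEFAULT_GET_JOB_TIMEOUT rendered in seconds, while the 128 inside the noise entries is the coroutine code flag. A quick check, standard library only:

import inspect

# The flag probed by call.__code__.co_flags.__and__(128); unrelated to
# the timeout=128 (DEFAULT_GET_JOB_TIMEOUT) in the expected jobs.get call.
assert inspect.CO_COROUTINE == 128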
_____________ TestClient.test_create_routine_w_conflict_exists_ok ______________

self = <tests.unit.test_client.TestClient testMethod=test_create_routine_w_conflict_exists_ok>

def test_create_routine_w_conflict_exists_ok(self):
from google.cloud.bigquery.routine import Routine

creds = _make_credentials()
client = self._make_one(project=self.PROJECT, credentials=creds)
resource = {
"routineReference": {
"projectId": "test-routine-project",
"datasetId": "test_routines",
"routineId": "minimal_routine",
}
}
path = "/projects/test-routine-project/datasets/test_routines/routines"

conn = client._connection = make_connection(
google.api_core.exceptions.AlreadyExists("routine already exists"), resource
)
full_routine_id = "test-routine-project.test_routines.minimal_routine"
routine = Routine(full_routine_id)
with mock.patch(
"google.cloud.bigquery.opentelemetry_tracing._get_final_span_attributes"
) as final_attributes:
actual_routine = client.create_routine(routine, exists_ok=True)

final_attributes.assert_called_with(
{"path": "%s/minimal_routine" % path}, client, None
)

self.assertEqual(actual_routine.project, "test-routine-project")
self.assertEqual(actual_routine.dataset_id, "test_routines")
self.assertEqual(actual_routine.routine_id, "minimal_routine")
> conn.api_request.assert_has_calls(
[
mock.call(
method="POST",
path=path,
data=resource,
timeout=DEFAULT_TIMEOUT,
),
mock.call(
method="GET",
path="/projects/test-routine-project/datasets/test_routines/routines/minimal_routine",
timeout=DEFAULT_TIMEOUT,
),
]
)

tests/unit/test_client.py:1058:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/test-routine-project/datasets/test_routines/routines', data={'routineReference': ...all(method='GET', path='/projects/test-routine-project/datasets/test_routines/routines/minimal_routine', timeout=None)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/test-routine-project/datasets/test_routines/routines', data={'routineReference': {'projectId': 'test-routine-project', 'datasetId': 'test_routines', 'routineId': 'minimal_routine'}}, timeout=None),
E call(method='GET', path='/projects/test-routine-project/datasets/test_routines/routines/minimal_routine', timeout=None)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/test-routine-project/datasets/test_routines/routines', data={'routineReference': {'projectId': 'test-routine-project', 'datasetId': 'test_routines', 'routineId': 'minimal_routine'}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/test-routine-project/datasets/test_routines/routines/minimal_routine', timeout=None)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
_________ TestClient.test_create_table_alreadyexists_w_exists_ok_true __________

self = <tests.unit.test_client.TestClient testMethod=test_create_table_alreadyexists_w_exists_ok_true>

def test_create_table_alreadyexists_w_exists_ok_true(self):
post_path = "/projects/{}/datasets/{}/tables".format(self.PROJECT, self.DS_ID)
get_path = "/projects/{}/datasets/{}/tables/{}".format(
self.PROJECT, self.DS_ID, self.TABLE_ID
)
resource = self._make_table_resource()
creds = _make_credentials()
client = self._make_one(
project=self.PROJECT, credentials=creds, location=self.LOCATION
)
conn = client._connection = make_connection(
google.api_core.exceptions.AlreadyExists("table already exists"), resource
)

with mock.patch(
"google.cloud.bigquery.opentelemetry_tracing._get_final_span_attributes"
) as final_attributes:
got = client.create_table(
"{}.{}".format(self.DS_ID, self.TABLE_ID), exists_ok=True
)

final_attributes.assert_called_with({"path": get_path}, client, None)

self.assertEqual(got.project, self.PROJECT)
self.assertEqual(got.dataset_id, self.DS_ID)
self.assertEqual(got.table_id, self.TABLE_ID)

> conn.api_request.assert_has_calls(
[
mock.call(
method="POST",
path=post_path,
data={
"tableReference": {
"projectId": self.PROJECT,
"datasetId": self.DS_ID,
"tableId": self.TABLE_ID,
},
"labels": {},
},
timeout=DEFAULT_TIMEOUT,
),
mock.call(method="GET", path=get_path, timeout=DEFAULT_TIMEOUT),
]
)

tests/unit/test_client.py:1510:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/PROJECT/datasets/DATASET_ID/tables', data={'tableReference': {'projectId': 'PROJE...s': {}}, timeout=None), call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID/tables/TABLE_ID', timeout=None)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/PROJECT/datasets/DATASET_ID/tables', data={'tableReference': {'projectId': 'PROJECT', 'datasetId': 'DATASET_ID', 'tableId': 'TABLE_ID'}, 'labels': {}}, timeout=None),
E call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID/tables/TABLE_ID', timeout=None)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/PROJECT/datasets/DATASET_ID/tables', data={'tableReference': {'projectId': 'PROJECT', 'datasetId': 'DATASET_ID', 'tableId': 'TABLE_ID'}, 'labels': {}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID/tables/TABLE_ID', timeout=None)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
_____________ TestClient.test_list_rows_w_start_index_w_page_size ______________

self = <tests.unit.test_client.TestClient testMethod=test_list_rows_w_start_index_w_page_size>

def test_list_rows_w_start_index_w_page_size(self):
from google.cloud.bigquery.schema import SchemaField
from google.cloud.bigquery.table import Table
from google.cloud.bigquery.table import Row

PATH = "projects/%s/datasets/%s/tables/%s/data" % (
self.PROJECT,
self.DS_ID,
self.TABLE_ID,
)

page_1 = {
"totalRows": 4,
"pageToken": "some-page-token",
"rows": [
{"f": [{"v": "Phred Phlyntstone"}]},
{"f": [{"v": "Bharney Rhubble"}]},
],
}
page_2 = {
"totalRows": 4,
"rows": [
{"f": [{"v": "Wylma Phlyntstone"}]},
{"f": [{"v": "Bhettye Rhubble"}]},
],
}
creds = _make_credentials()
http = object()
client = self._make_one(project=self.PROJECT, credentials=creds, _http=http)
conn = client._connection = make_connection(page_1, page_2)
full_name = SchemaField("full_name", "STRING", mode="REQUIRED")
table = Table(self.TABLE_REF, schema=[full_name])
iterator = client.list_rows(table, max_results=4, page_size=2, start_index=1)
pages = iterator.pages
rows = list(next(pages))
extra_params = iterator.extra_params
f2i = {"full_name": 0}
self.assertEqual(len(rows), 2)
self.assertEqual(rows[0], Row(("Phred Phlyntstone",), f2i))
self.assertEqual(rows[1], Row(("Bharney Rhubble",), f2i))

rows = list(next(pages))

self.assertEqual(len(rows), 2)
self.assertEqual(rows[0], Row(("Wylma Phlyntstone",), f2i))
self.assertEqual(rows[1], Row(("Bhettye Rhubble",), f2i))
self.assertEqual(
extra_params, {"startIndex": 1, "formatOptions.useInt64Timestamp": True}
)

> conn.api_request.assert_has_calls(
[
mock.call(
method="GET",
path="/%s" % PATH,
query_params={
"startIndex": 1,
"maxResults": 2,
"formatOptions.useInt64Timestamp": True,
},
timeout=DEFAULT_TIMEOUT,
),
mock.call(
method="GET",
path="/%s" % PATH,
query_params={
"pageToken": "some-page-token",
"maxResults": 2,
"formatOptions.useInt64Timestamp": True,
},
timeout=DEFAULT_TIMEOUT,
),
]
)

tests/unit/test_client.py:6858:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID/tables/TABLE_ID/data', query_params={'startIndex': 1, ...query_params={'pageToken': 'some-page-token', 'maxResults': 2, 'formatOptions.useInt64Timestamp': True}, timeout=None)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID/tables/TABLE_ID/data', query_params={'startIndex': 1, 'maxResults': 2, 'formatOptions.useInt64Timestamp': True}, timeout=None),
E call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID/tables/TABLE_ID/data', query_params={'pageToken': 'some-page-token', 'maxResults': 2, 'formatOptions.useInt64Timestamp': True}, timeout=None)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(timeout=None, method='GET', path='/projects/PROJECT/datasets/DATASET_ID/tables/TABLE_ID/data', query_params={'maxResults': 2, 'startIndex': 1, 'formatOptions.useInt64Timestamp': True}),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(timeout=None, method='GET', path='/projects/PROJECT/datasets/DATASET_ID/tables/TABLE_ID/data', query_params={'pageToken': 'some-page-token', 'maxResults': 2, 'formatOptions.useInt64Timestamp': True})]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
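Worth noting for this failure: the real API calls in "Actual" match the expected ones exactly and only render with a different keyword order (timeout=None first). Keyword order is cosmetic, since call objects compare their kwargs as dicts, so the mismatch here is entirely due to the interleaved probe entries. A one-line check:

from unittest import mock

# kwargs order in the repr does not affect call equality:
assert mock.call(timeout=None, method="GET", path="/d") == mock.call(
    method="GET", path="/d", timeout=None
)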
_________ TestClient.test_query_and_wait_w_page_size_multiple_requests _________

self = <tests.unit.test_client.TestClient testMethod=test_query_and_wait_w_page_size_multiple_requests>

def test_query_and_wait_w_page_size_multiple_requests(self):
"""
For queries that last longer than the initial (about 10s) call to
jobs.query, we should still pass through the page size to the
subsequent calls to jobs.getQueryResults.

See internal issue 344008814.
"""
query = "select count(*) from `bigquery-public-data.usa_names.usa_1910_2013`"
job_reference = {
"projectId": "my-jobs-project",
"location": "my-jobs-location",
"jobId": "my-jobs-id",
}
jobs_query_response = {
"jobComplete": False,
"jobReference": job_reference,
}
jobs_get_response = {
"jobReference": job_reference,
"status": {"state": "DONE"},
}
get_query_results_response = {
"jobComplete": True,
}
creds = _make_credentials()
http = object()
client = self._make_one(project=self.PROJECT, credentials=creds, _http=http)
conn = client._connection = make_connection(
jobs_query_response,
jobs_get_response,
get_query_results_response,
)

_ = client.query_and_wait(query, page_size=11)

> conn.api_request.assert_has_calls(
[
# Verify the request we send is to jobs.query.
mock.call(
method="POST",
path=f"/projects/{self.PROJECT}/queries",
data={
"useLegacySql": False,
"query": query,
"formatOptions": {"useInt64Timestamp": True},
"maxResults": 11,
"requestId": mock.ANY,
},
timeout=None,
),
# jobs.get: Check if the job has finished.
mock.call(
method="GET",
path="/projects/my-jobs-project/jobs/my-jobs-id",
query_params={
"projection": "full",
"location": "my-jobs-location",
},
timeout=google.cloud.bigquery.retry.DEFAULT_GET_JOB_TIMEOUT,
),
# jobs.getQueryResults: wait for the query / fetch first page
mock.call(
method="GET",
path="/projects/my-jobs-project/queries/my-jobs-id",
query_params={
# We should still pass through the page size to the
# subsequent calls to jobs.getQueryResults.
#
# See internal issue 344008814.
"maxResults": 11,
"formatOptions.useInt64Timestamp": True,
"location": "my-jobs-location",
},
timeout=None,
),
]
)

tests/unit/test_client.py:5526:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/PROJECT/queries', data={'useLegacySql': False, 'query': 'select count(*) from `bi...uery_params={'maxResults': 11, 'formatOptions.useInt64Timestamp': True, 'location': 'my-jobs-location'}, timeout=None)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/PROJECT/queries', data={'useLegacySql': False, 'query': 'select count(*) from `bigquery-public-data.usa_names.usa_1910_2013`', 'formatOptions': {'useInt64Timestamp': True}, 'maxResults': 11, 'requestId': <ANY>}, timeout=None),
E call(method='GET', path='/projects/my-jobs-project/jobs/my-jobs-id', query_params={'projection': 'full', 'location': 'my-jobs-location'}, timeout=128),
E call(method='GET', path='/projects/my-jobs-project/queries/my-jobs-id', query_params={'maxResults': 11, 'formatOptions.useInt64Timestamp': True, 'location': 'my-jobs-location'}, timeout=None)]
E Actual: [call(method='POST', path='/projects/PROJECT/queries', data={'useLegacySql': False, 'formatOptions': {'useInt64Timestamp': True}, 'query': 'select count(*) from `bigquery-public-data.usa_names.usa_1910_2013`', 'maxResults': 11, 'requestId': 'a7354397-e26e-4405-855b-1ee4ad52d213'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/my-jobs-project/jobs/my-jobs-id', query_params={'projection': 'full', 'location': 'my-jobs-location'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/my-jobs-project/queries/my-jobs-id', query_params={'maxResults': 11, 'formatOptions.useInt64Timestamp': True, 'location': 'my-jobs-location'}, timeout=None)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
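The expected jobs.query call above uses "requestId": mock.ANY, which is why the Expected repr shows <ANY> where the Actual list has a concrete UUID; mock.ANY compares equal to anything:

from unittest import mock

# mock.ANY matches any concrete value, e.g. the generated request ID:
assert mock.ANY == "a7354397-e26e-4405-855b-1ee4ad52d213"
assert repr(mock.ANY) == "<ANY>"  # the token shown in the Expected list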
______________ test_create_dataset_alreadyexists_w_exists_ok_true ______________

PROJECT = 'PROJECT', DS_ID = 'DATASET_ID', LOCATION = 'us-central'

def test_create_dataset_alreadyexists_w_exists_ok_true(PROJECT, DS_ID, LOCATION):
post_path = "/projects/{}/datasets".format(PROJECT)
get_path = "/projects/{}/datasets/{}".format(PROJECT, DS_ID)
resource = {
"datasetReference": {"projectId": PROJECT, "datasetId": DS_ID},
"etag": "etag",
"id": "{}:{}".format(PROJECT, DS_ID),
"location": LOCATION,
}
client = make_client(location=LOCATION)
conn = client._connection = make_connection(
google.api_core.exceptions.AlreadyExists("dataset already exists"), resource
)
with mock.patch(
"google.cloud.bigquery.opentelemetry_tracing._get_final_span_attributes"
) as final_attributes:
dataset = client.create_dataset(DS_ID, exists_ok=True)

final_attributes.assert_called_with({"path": get_path}, client, None)

assert dataset.dataset_id == DS_ID
assert dataset.project == PROJECT
assert dataset.etag == resource["etag"]
assert dataset.full_dataset_id == resource["id"]
assert dataset.location == LOCATION

> conn.api_request.assert_has_calls(
[
mock.call(
method="POST",
path=post_path,
data={
"datasetReference": {"projectId": PROJECT, "datasetId": DS_ID},
"labels": {},
"location": LOCATION,
},
timeout=DEFAULT_TIMEOUT,
),
mock.call(method="GET", path=get_path, timeout=DEFAULT_TIMEOUT),
]
)

tests/unit/test_create_dataset.py:358:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/PROJECT/datasets', data={'datasetReference': {'projectId': 'PROJECT', 'datasetId'...ocation': 'us-central'}, timeout=None), call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID', timeout=None)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/PROJECT/datasets', data={'datasetReference': {'projectId': 'PROJECT', 'datasetId': 'DATASET_ID'}, 'labels': {}, 'location': 'us-central'}, timeout=None),
E call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID', timeout=None)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/PROJECT/datasets', data={'datasetReference': {'projectId': 'PROJECT', 'datasetId': 'DATASET_ID'}, 'labels': {}, 'location': 'us-central'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/datasets/DATASET_ID', timeout=None)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
__ test_retry_connection_error_with_default_retries_and_successful_first_job ___

monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x7f0c42377c50>
client =

def test_retry_connection_error_with_default_retries_and_successful_first_job(
monkeypatch, client
):
"""
Make sure ConnectionError can be retried at `is_job_done` level, even if
retries are exhausted by API-level retry.

Note: Because restart_query_job is set to True only in the case of a
confirmed job failure, this should be safe to do even when a job is not
idempotent.

Regression test for issue
https://github.com/googleapis/python-bigquery/issues/1929
"""
job_counter = 0

def make_job_id(*args, **kwargs):
nonlocal job_counter
job_counter += 1
return f"{job_counter}"

monkeypatch.setattr(_job_helpers, "make_job_id", make_job_id)
conn = client._connection = make_connection()
project = client.project
job_reference_1 = {"projectId": project, "jobId": "1", "location": "test-loc"}
NUM_API_RETRIES = 2

with freezegun.freeze_time(
"2024-01-01 00:00:00",
# Note: because of exponential backoff and a bit of jitter,
# NUM_API_RETRIES will get less accurate the greater the value.
# We add 1 because we know there will be at least some additional
# calls to fetch the time / sleep before the retry deadline is hit.
auto_tick_seconds=(
google.cloud.bigquery.retry._DEFAULT_RETRY_DEADLINE / NUM_API_RETRIES
)
+ 1,
):
conn.api_request.side_effect = [
# jobs.insert
{"jobReference": job_reference_1, "status": {"state": "PENDING"}},
# jobs.get
{"jobReference": job_reference_1, "status": {"state": "RUNNING"}},
# jobs.getQueryResults x2
requests.exceptions.ConnectionError(),
requests.exceptions.ConnectionError(),
# jobs.get
# Job actually succeeded, so we shouldn't be restarting the job,
# even though we are retrying at the `is_job_done` level.
{"jobReference": job_reference_1, "status": {"state": "DONE"}},
# jobs.getQueryResults
{"jobReference": job_reference_1, "jobComplete": True},
]

job = client.query("select 1")
rows_iter = job.result()

assert job.done() # Shouldn't make any additional API calls.
assert rows_iter is not None

# Should only have created one job, even though we did call job_retry.
assert job_counter == 1

# Double-check that we made the API calls we expected to make.
> conn.api_request.assert_has_calls(
[
# jobs.insert
mock.call(
method="POST",
path="/projects/PROJECT/jobs",
data={
"jobReference": {"jobId": "1", "projectId": "PROJECT"},
"configuration": {
"query": {"useLegacySql": False, "query": "select 1"}
},
},
timeout=None,
),
# jobs.get
mock.call(
method="GET",
path="/projects/PROJECT/jobs/1",
query_params={"location": "test-loc", "projection": "full"},
timeout=google.cloud.bigquery.retry.DEFAULT_GET_JOB_TIMEOUT,
),
# jobs.getQueryResults x2
mock.call(
method="GET",
path="/projects/PROJECT/queries/1",
query_params={"maxResults": 0, "location": "test-loc"},
timeout=None,
),
mock.call(
method="GET",
path="/projects/PROJECT/queries/1",
query_params={"maxResults": 0, "location": "test-loc"},
timeout=None,
),
# jobs.get -- is_job_done checking again
mock.call(
method="GET",
path="/projects/PROJECT/jobs/1",
query_params={"location": "test-loc", "projection": "full"},
timeout=google.cloud.bigquery.retry.DEFAULT_GET_JOB_TIMEOUT,
),
# jobs.getQueryResults
mock.call(
method="GET",
path="/projects/PROJECT/queries/1",
query_params={"maxResults": 0, "location": "test-loc"},
timeout=None,
),
],
)

tests/unit/test_job_retry.py:204:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/PROJECT/jobs', data={'jobReference': {'jobId': '1', 'projectId': 'PROJECT'}, 'con...ethod='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/PROJECT/jobs', data={'jobReference': {'jobId': '1', 'projectId': 'PROJECT'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select 1'}}}, timeout=None),
E call(method='GET', path='/projects/PROJECT/jobs/1', query_params={'location': 'test-loc', 'projection': 'full'}, timeout=128),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call(method='GET', path='/projects/PROJECT/jobs/1', query_params={'location': 'test-loc', 'projection': 'full'}, timeout=128),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/PROJECT/jobs', data={'jobReference': {'jobId': '1', 'projectId': 'PROJECT'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select 1'}}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/jobs/1', query_params={'projection': 'full', 'location': 'test-loc'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/jobs/1', query_params={'projection': 'full', 'location': 'test-loc'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
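The freezegun usage in this test is what lets it exhaust a retry deadline instantly: with auto_tick_seconds, every read of the frozen clock jumps time forward, so roughly NUM_API_RETRIES reads consume the whole deadline without any real sleeping. A small sketch, assuming an illustrative 600-second deadline in place of _DEFAULT_RETRY_DEADLINE (whose value is not shown in this log):

import datetime
import freezegun

DEADLINE = 600.0  # assumed for illustration
NUM_API_RETRIES = 2

with freezegun.freeze_time(
    "2024-01-01 00:00:00",
    auto_tick_seconds=DEADLINE / NUM_API_RETRIES + 1,  # 301 s per read
):
    t1 = datetime.datetime.now()
    t2 = datetime.datetime.now()

# Each clock read advanced the frozen time by 301 s, so a 600 s retry
# deadline is crossed after about two attempts.
assert (t2 - t1).total_seconds() == 301.0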
_ test_query_retry_with_default_retry_and_ambiguous_errors_only_retries_with_failed_job _

client =
monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x7f0c40ea9a30>

def test_query_retry_with_default_retry_and_ambiguous_errors_only_retries_with_failed_job(
client, monkeypatch
):
"""
Some errors like 'rateLimitExceeded' can be ambiguous. Make sure we only
retry the job when we know for sure that the job has failed for a retriable
reason. We can only be sure after a "successful" call to jobs.get to fetch
the failed job status.
"""
job_counter = 0

def make_job_id(*args, **kwargs):
nonlocal job_counter
job_counter += 1
return f"{job_counter}"

monkeypatch.setattr(_job_helpers, "make_job_id", make_job_id)

project = client.project
job_reference_1 = {"projectId": project, "jobId": "1", "location": "test-loc"}
job_reference_2 = {"projectId": project, "jobId": "2", "location": "test-loc"}
NUM_API_RETRIES = 2

# This error is modeled after a real customer exception in
# https://github.com/googleapis/python-bigquery/issues/707.
internal_error = google.api_core.exceptions.InternalServerError(
"Job failed just because...",
errors=[
{"reason": "internalError"},
],
)
responses = [
# jobs.insert
{"jobReference": job_reference_1, "status": {"state": "PENDING"}},
# jobs.get
{"jobReference": job_reference_1, "status": {"state": "RUNNING"}},
# jobs.getQueryResults x2
#
# Note: internalError is ambiguous in jobs.getQueryResults. The
# problem could be at the Google Frontend level or it could be because
# the job has failed due to some transient issue and the BigQuery
# REST API is translating the failed job status into failure HTTP
# codes.
#
# TODO(GH#1903): We shouldn't retry nearly this many times when we get
# ambiguous errors from jobs.getQueryResults.
# See: https://github.com/googleapis/python-bigquery/issues/1903
internal_error,
internal_error,
# jobs.get -- the job has failed
{
"jobReference": job_reference_1,
"status": {"state": "DONE", "errorResult": {"reason": "internalError"}},
},
# jobs.insert
{"jobReference": job_reference_2, "status": {"state": "PENDING"}},
# jobs.get
{"jobReference": job_reference_2, "status": {"state": "RUNNING"}},
# jobs.getQueryResults
{"jobReference": job_reference_2, "jobComplete": True},
# jobs.get
{"jobReference": job_reference_2, "status": {"state": "DONE"}},
]

conn = client._connection = make_connection(*responses)

with freezegun.freeze_time(
# Note: because of exponential backoff and a bit of jitter,
# NUM_API_RETRIES will get less accurate the greater the value.
# We add 1 because we know there will be at least some additional
# calls to fetch the time / sleep before the retry deadline is hit.
auto_tick_seconds=(
google.cloud.bigquery.retry._DEFAULT_RETRY_DEADLINE / NUM_API_RETRIES
)
+ 1,
):
job = client.query("select 1")
job.result()

> conn.api_request.assert_has_calls(
[
# jobs.insert
mock.call(
method="POST",
path="/projects/PROJECT/jobs",
data={
"jobReference": {"jobId": "1", "projectId": "PROJECT"},
"configuration": {
"query": {"useLegacySql": False, "query": "select 1"}
},
},
timeout=None,
),
# jobs.get
mock.call(
method="GET",
path="/projects/PROJECT/jobs/1",
query_params={"location": "test-loc", "projection": "full"},
timeout=google.cloud.bigquery.retry.DEFAULT_GET_JOB_TIMEOUT,
),
# jobs.getQueryResults x2
mock.call(
method="GET",
path="/projects/PROJECT/queries/1",
query_params={"maxResults": 0, "location": "test-loc"},
timeout=None,
),
mock.call(
method="GET",
path="/projects/PROJECT/queries/1",
query_params={"maxResults": 0, "location": "test-loc"},
timeout=None,
),
# jobs.get -- verify that the job has failed
mock.call(
method="GET",
path="/projects/PROJECT/jobs/1",
query_params={"location": "test-loc", "projection": "full"},
timeout=google.cloud.bigquery.retry.DEFAULT_GET_JOB_TIMEOUT,
),
# jobs.insert
mock.call(
method="POST",
path="/projects/PROJECT/jobs",
data={
"jobReference": {
# Make sure that we generated a new job ID.
"jobId": "2",
"projectId": "PROJECT",
},
"configuration": {
"query": {"useLegacySql": False, "query": "select 1"}
},
},
timeout=None,
),
# jobs.get
mock.call(
method="GET",
path="/projects/PROJECT/jobs/2",
query_params={"location": "test-loc", "projection": "full"},
timeout=google.cloud.bigquery.retry.DEFAULT_GET_JOB_TIMEOUT,
),
# jobs.getQueryResults
mock.call(
method="GET",
path="/projects/PROJECT/queries/2",
query_params={"maxResults": 0, "location": "test-loc"},
timeout=None,
),
# jobs.get
mock.call(
method="GET",
path="/projects/PROJECT/jobs/2",
query_params={"location": "test-loc", "projection": "full"},
timeout=google.cloud.bigquery.retry.DEFAULT_GET_JOB_TIMEOUT,
),
]
)

tests/unit/test_job_retry.py:335:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/PROJECT/jobs', data={'jobReference': {'jobId': '1', 'projectId': 'PROJECT'}, 'con...'projectId': 'PROJECT'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select 1'}}}, timeout=None), ...]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/PROJECT/jobs', data={'jobReference': {'jobId': '1', 'projectId': 'PROJECT'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select 1'}}}, timeout=None),
E call(method='GET', path='/projects/PROJECT/jobs/1', query_params={'location': 'test-loc', 'projection': 'full'}, timeout=128),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call(method='GET', path='/projects/PROJECT/jobs/1', query_params={'location': 'test-loc', 'projection': 'full'}, timeout=128),
E call(method='POST', path='/projects/PROJECT/jobs', data={'jobReference': {'jobId': '2', 'projectId': 'PROJECT'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select 1'}}}, timeout=None),
E call(method='GET', path='/projects/PROJECT/jobs/2', query_params={'location': 'test-loc', 'projection': 'full'}, timeout=128),
E call(method='GET', path='/projects/PROJECT/queries/2', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call(method='GET', path='/projects/PROJECT/jobs/2', query_params={'location': 'test-loc', 'projection': 'full'}, timeout=128)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/PROJECT/jobs', data={'jobReference': {'jobId': '1', 'projectId': 'PROJECT'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select 1'}}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/jobs/1', query_params={'projection': 'full', 'location': 'test-loc'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call(method='GET', path='/projects/PROJECT/queries/1', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/jobs/1', query_params={'projection': 'full', 'location': 'test-loc'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/PROJECT/jobs', data={'jobReference': {'jobId': '2', 'projectId': 'PROJECT'}, 'configuration': {'query': {'useLegacySql': False, 'query': 'select 1'}}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/jobs/2', query_params={'projection': 'full', 'location': 'test-loc'}, timeout=128),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/queries/2', query_params={'maxResults': 0, 'location': 'test-loc'}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/PROJECT/jobs/2', query_params={'projection': 'full', 'location': 'test-loc'}, timeout=128)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
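The test above encodes the retry policy described in its docstring: an ambiguous error from jobs.getQueryResults must not restart the job by itself; only a jobs.get response showing a terminal state with an errorResult justifies inserting a new job with a fresh job ID. A minimal sketch of that decision rule; the helper name is hypothetical, not python-bigquery's API:

def confirmed_job_failure(job_resource: dict) -> bool:
    # Hypothetical helper: restart only on a confirmed failed job.
    status = job_resource.get("status", {})
    return status.get("state") == "DONE" and "errorResult" in status

# Ambiguous: the API call errored, but the job may still be running.
assert not confirmed_job_failure({"status": {"state": "RUNNING"}})
# Confirmed failure from jobs.get: safe to restart with a new job ID.
assert confirmed_job_failure(
    {"status": {"state": "DONE", "errorResult": {"reason": "internalError"}}}
)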
_____________________ test_context_with_default_connection _____________________

monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x7f0c42b82270>

@pytest.mark.usefixtures("ipython_interactive")
@pytest.mark.skipif(pandas is None, reason="Requires `pandas`")
def test_context_with_default_connection(monkeypatch):
ip = IPython.get_ipython()
monkeypatch.setattr(bigquery, "bigquery_magics", None)
bigquery.load_ipython_extension(ip)
magics.context._credentials = None
magics.context._project = None
magics.context._connection = None

default_credentials = mock.create_autospec(
google.auth.credentials.Credentials, instance=True
)
credentials_patch = mock.patch(
"google.auth.default", return_value=(default_credentials, "project-from-env")
)
default_conn = make_connection(QUERY_RESOURCE, QUERY_RESULTS_RESOURCE)
conn_patch = mock.patch("google.cloud.bigquery.client.Connection", autospec=True)
list_rows_patch = mock.patch(
"google.cloud.bigquery.client.Client._list_rows_from_query_results",
return_value=google.cloud.bigquery.table._EmptyRowIterator(),
)

with conn_patch as conn, credentials_patch, list_rows_patch as list_rows:
conn.return_value = default_conn
ip.run_cell_magic("bigquery", "", QUERY_STRING)

# Check that query actually starts the job.
conn.assert_called()
list_rows.assert_called()
begin_call = mock.call(
method="POST",
path="/projects/project-from-env/jobs",
data=mock.ANY,
timeout=DEFAULT_TIMEOUT,
)
query_results_call = mock.call(
method="GET",
path=f"/projects/{PROJECT_ID}/queries/{JOB_ID}",
query_params=mock.ANY,
timeout=mock.ANY,
)
> default_conn.api_request.assert_has_calls([begin_call, query_results_call])

tests/unit/test_magics.py:198:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/project-from-env/jobs', data=<ANY>, timeout=None), call(method='GET', path='/projects/its-a-project-eh/queries/some-random-id', query_params=<ANY>, timeout=<ANY>)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/project-from-env/jobs', data=<ANY>, timeout=None),
E call(method='GET', path='/projects/its-a-project-eh/queries/some-random-id', query_params=<ANY>, timeout=<ANY>)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/project-from-env/jobs', data={'jobReference': {'jobId': 'e5169533-be61-4b95-99fb-e52665cbf0e1', 'projectId': 'project-from-env'}, 'configuration': {'query': {'queryParameters': [], 'useLegacySql': False, 'query': 'SELECT 42 AS the_answer FROM `life.the_universe.and_everything`;'}, 'dryRun': False}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/its-a-project-eh/queries/some-random-id', query_params={'maxResults': 0}, timeout=120)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
----------------------------- Captured stdout call -----------------------------
Executing query with job ID: some-random-id
Query executing: 0.00s
Job ID some-random-id successfully executed
Query is running: 0%| |
_____________________ test_context_with_custom_connection ______________________

monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x7f0c42dc6150>

@pytest.mark.usefixtures("ipython_interactive")
@pytest.mark.skipif(pandas is None, reason="Requires `pandas`")
def test_context_with_custom_connection(monkeypatch):
ip = IPython.get_ipython()
monkeypatch.setattr(bigquery, "bigquery_magics", None)
bigquery.load_ipython_extension(ip)
magics.context._project = None
magics.context._credentials = None
context_conn = magics.context._connection = make_connection(
QUERY_RESOURCE, QUERY_RESULTS_RESOURCE
)

default_credentials = mock.create_autospec(
google.auth.credentials.Credentials, instance=True
)
credentials_patch = mock.patch(
"google.auth.default", return_value=(default_credentials, "project-from-env")
)
default_conn = make_connection()
conn_patch = mock.patch("google.cloud.bigquery.client.Connection", autospec=True)
list_rows_patch = mock.patch(
"google.cloud.bigquery.client.Client._list_rows_from_query_results",
return_value=google.cloud.bigquery.table._EmptyRowIterator(),
)

with conn_patch as conn, credentials_patch, list_rows_patch as list_rows:
conn.return_value = default_conn
ip.run_cell_magic("bigquery", "", QUERY_STRING)

list_rows.assert_called()
default_conn.api_request.assert_not_called()
begin_call = mock.call(
method="POST",
path="/projects/project-from-env/jobs",
data=mock.ANY,
timeout=DEFAULT_TIMEOUT,
)
query_results_call = mock.call(
method="GET",
path=f"/projects/{PROJECT_ID}/queries/{JOB_ID}",
query_params=mock.ANY,
timeout=mock.ANY,
)
> context_conn.api_request.assert_has_calls([begin_call, query_results_call])

tests/unit/test_magics.py:263:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =
calls = [call(method='POST', path='/projects/project-from-env/jobs', data=<ANY>, timeout=None), call(method='GET', path='/projects/its-a-project-eh/queries/some-random-id', query_params=<ANY>, timeout=<ANY>)]
any_order = False

def assert_has_calls(self, calls, any_order=False):
"""assert the mock has been called with the specified calls.
The `mock_calls` list is checked for the calls.

If `any_order` is False (the default) then the calls must be
sequential. There can be extra calls before or after the
specified calls.

If `any_order` is True then the calls can be in any order, but
they must all appear in `mock_calls`."""
expected = [self._call_matcher(c) for c in calls]
cause = next((e for e in expected if isinstance(e, Exception)), None)
all_calls = _CallList(self._call_matcher(c) for c in self.mock_calls)
if not any_order:
if expected not in all_calls:
if cause is None:
problem = 'Calls not found.'
else:
problem = ('Error processing expected calls.\n'
'Errors: {}').format(
[e if isinstance(e, Exception) else None
for e in expected])
> raise AssertionError(
f'{problem}\n'
f'Expected: {_CallList(calls)}'
f'{self._calls_repr(prefix="Actual").rstrip(".")}'
) from cause
E AssertionError: Calls not found.
E Expected: [call(method='POST', path='/projects/project-from-env/jobs', data=<ANY>, timeout=None),
E call(method='GET', path='/projects/its-a-project-eh/queries/some-random-id', query_params=<ANY>, timeout=<ANY>)]
E Actual: [call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='POST', path='/projects/project-from-env/jobs', data={'jobReference': {'jobId': 'aebafefd-6fa0-4a00-85a7-3815f1750c21', 'projectId': 'project-from-env'}, 'configuration': {'query': {'queryParameters': [], 'useLegacySql': False, 'query': 'SELECT 42 AS the_answer FROM `life.the_universe.and_everything`;'}, 'dryRun': False}}, timeout=None),
E call.__code__.co_flags.__and__(128),
E call.__code__.co_flags.__and__().__bool__(),
E call(method='GET', path='/projects/its-a-project-eh/queries/some-random-id', query_params={'maxResults': 0}, timeout=120)]

/usr/local/lib/python3.12/unittest/mock.py:981: AssertionError
----------------------------- Captured stdout call -----------------------------
Executing query with job ID: some-random-id
Query executing: 0.00s
Job ID some-random-id successfully executed
Query is running: 0%| |
=============================== warnings summary ===============================
tests/unit/test_opentelemetry_tracing.py:45
/tmpfs/src/github/python-bigquery/tests/unit/test_opentelemetry_tracing.py:45: PytestRemovedIn9Warning: Marks applied to fixtures have no effect
See docs: https://docs.pytest.org/en/stable/deprecations.html#applying-a-mark-to-a-fixture-function
@pytest.mark.skipif(opentelemetry is None, reason="Require `opentelemetry`")

tests/unit/job/test_base.py::Test_AsyncJob::test__set_properties_w_creation_time
tests/unit/job/test_base.py::Test_AsyncJob::test__set_properties_w_end_time
tests/unit/job/test_base.py::Test_AsyncJob::test__set_properties_w_start_time
tests/unit/job/test_base.py::Test_AsyncJob::test_created
tests/unit/job/test_base.py::Test_AsyncJob::test_ended
tests/unit/job/test_base.py::Test_AsyncJob::test_started
/tmpfs/src/github/python-bigquery/tests/unit/job/test_base.py:334: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
now = datetime.datetime.utcnow().replace(

tests/unit/job/test_base.py: 5 warnings
tests/unit/job/test_copy.py: 6 warnings
tests/unit/job/test_extract.py: 6 warnings
tests/unit/job/test_load.py: 14 warnings
tests/unit/job/test_query.py: 27 warnings
tests/unit/job/test_query_pandas.py: 15 warnings
tests/unit/test__job_helpers.py: 2 warnings
tests/unit/test_client.py: 128 warnings
tests/unit/test_create_dataset.py: 16 warnings
tests/unit/test_delete_dataset.py: 10 warnings
tests/unit/test_job_retry.py: 11 warnings
tests/unit/test_list_datasets.py: 4 warnings
tests/unit/test_list_jobs.py: 8 warnings
tests/unit/test_list_models.py: 9 warnings
tests/unit/test_list_projects.py: 4 warnings
tests/unit/test_list_routines.py: 9 warnings
tests/unit/test_list_tables.py: 13 warnings
tests/unit/test_magics.py: 8 warnings
/tmpfs/src/github/python-bigquery/.nox/unit-3-12/lib/python3.12/site-packages/google/api_core/retry/retry_unary.py:281: UserWarning: Using the synchronous google.api_core.retry.Retry with asynchronous calls may lead to unexpected results. Please use google.api_core.retry_async.AsyncRetry instead.
warnings.warn(_ASYNC_RETRY_WARNING)

tests/unit/job/test_copy.py: 11 warnings
tests/unit/job/test_extract.py: 9 warnings
tests/unit/job/test_load.py: 18 warnings
tests/unit/job/test_query.py: 34 warnings
/tmpfs/src/github/python-bigquery/tests/unit/job/helpers.py:109: DeprecationWarning: datetime.datetime.utcfromtimestamp() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.fromtimestamp(timestamp, datetime.UTC).
self.WHEN = datetime.datetime.utcfromtimestamp(self.WHEN_TS).replace(tzinfo=UTC)
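The datetime deprecation warnings in this summary all have the same timezone-aware replacements, quoted in the warning text itself (datetime.UTC is an alias for datetime.timezone.utc, available since Python 3.11):

import datetime

now = datetime.datetime.now(datetime.UTC)  # replaces utcnow()
when = datetime.datetime.fromtimestamp(1_700_000_000, datetime.UTC)  # replaces utcfromtimestamp()
# Both results are already tz-aware, so the .replace(tzinfo=UTC) step
# seen in the warned code becomes unnecessary.
assert now.tzinfo is datetime.UTC and when.tzinfo is datetime.UTC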

tests/unit/test__pandas_helpers.py::test_dataframe_to_json_generator
/tmpfs/src/github/python-bigquery/tests/unit/test__pandas_helpers.py:886: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
utcnow = datetime.datetime.utcnow()

tests/unit/test_client.py::TestClient::test_insert_rows_w_list_of_dictionaries
/tmpfs/src/github/python-bigquery/tests/unit/test_client.py:5700: DeprecationWarning: datetime.datetime.utcfromtimestamp() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.fromtimestamp(timestamp, datetime.UTC).
WHEN = datetime.datetime.utcfromtimestamp(WHEN_TS).replace(tzinfo=UTC)

tests/unit/test_client.py::TestClient::test_insert_rows_w_schema
/tmpfs/src/github/python-bigquery/tests/unit/test_client.py:5639: DeprecationWarning: datetime.datetime.utcfromtimestamp() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.fromtimestamp(timestamp, datetime.UTC).
WHEN = datetime.datetime.utcfromtimestamp(WHEN_TS).replace(tzinfo=UTC)

tests/unit/test_client.py::TestClient::test_list_rows
/tmpfs/src/github/python-bigquery/tests/unit/test_client.py:6752: DeprecationWarning: datetime.datetime.utcfromtimestamp() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.fromtimestamp(timestamp, datetime.UTC).
WHEN = datetime.datetime.utcfromtimestamp(WHEN_TS / 1e6).replace(tzinfo=UTC)

tests/unit/test_dataset.py::TestDataset::test_from_api_repr_bare
tests/unit/test_dataset.py::TestDataset::test_from_api_repr_missing_identity
tests/unit/test_dataset.py::TestDataset::test_from_api_repr_w_properties
/tmpfs/src/github/python-bigquery/tests/unit/test_dataset.py:668: DeprecationWarning: datetime.datetime.utcfromtimestamp() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.fromtimestamp(timestamp, datetime.UTC).
self.WHEN = datetime.datetime.utcfromtimestamp(self.WHEN_TS).replace(tzinfo=UTC)

tests/unit/test_magics.py: 59 warnings
/tmpfs/src/github/python-bigquery/google/cloud/bigquery/__init__.py:237: FutureWarning: %load_ext google.cloud.bigquery is deprecated. Install bigquery-magics package and use `%load_ext bigquery_magics`, instead.
warnings.warn(

tests/unit/test_query.py::Test_ScalarQueryParameter::test_to_api_repr_w_datetime_datetime
/tmpfs/src/github/python-bigquery/tests/unit/test_query.py:655: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
now = datetime.datetime.utcnow()

tests/unit/test_query.py::Test_ScalarQueryParameter::test_to_api_repr_w_datetime_string
/tmpfs/src/github/python-bigquery/tests/unit/test_query.py:669: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
now = datetime.datetime.utcnow()

tests/unit/test_query.py::Test_ScalarQueryParameter::test_to_api_repr_w_timestamp_micros
/tmpfs/src/github/python-bigquery/tests/unit/test_query.py:642: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
now = datetime.datetime.utcnow()

tests/unit/test_query.py::Test_RangeQueryParameter::test_to_api_repr_w_datetime_datetime
/tmpfs/src/github/python-bigquery/tests/unit/test_query.py:1052: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
now = datetime.datetime.utcnow()

tests/unit/test_query.py::Test_RangeQueryParameter::test_to_api_repr_w_timestamp_timestamp
/tmpfs/src/github/python-bigquery/tests/unit/test_query.py:1092: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
now = datetime.datetime.utcnow()

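Likewise for the utcnow() warnings; the suggested replacement, sketched:

    import datetime

    now_old = datetime.datetime.utcnow()                    # deprecated, naive
    now_new = datetime.datetime.now(datetime.timezone.utc)  # aware, as the warning suggests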
tests/unit/test_table.py::TestTable::test_from_api_repr_bare
tests/unit/test_table.py::TestTable::test_from_api_repr_missing_identity
tests/unit/test_table.py::TestTable::test_from_api_repr_w_partial_streamingbuffer
tests/unit/test_table.py::TestTable::test_from_api_repr_w_properties
tests/unit/test_table.py::TestTable::test_from_api_with_encryption
/tmpfs/src/github/python-bigquery/tests/unit/test_table.py:395: DeprecationWarning: datetime.datetime.utcfromtimestamp() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.fromtimestamp(timestamp, datetime.UTC).
self.WHEN = datetime.datetime.utcfromtimestamp(self.WHEN_TS).replace(tzinfo=UTC)

tests/unit/test_table.py::TestTable::test_to_api_repr_w_unsetting_expiration
/tmpfs/src/github/python-bigquery/tests/unit/test_table.py:1170: PendingDeprecationWarning: This method will be deprecated in future versions. Please use Table.time_partitioning.expiration_ms instead.
table.partition_expiration = None

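The PendingDeprecationWarning points at the newer property. A sketch against the public API (the table ID is hypothetical):

    from google.cloud import bigquery

    table = bigquery.Table("my-project.my_dataset.my_table")  # hypothetical ID

    # Deprecated spelling, which triggers the warning above:
    #     table.partition_expiration = None
    # Preferred spelling, per the warning:
    if table.time_partitioning is None:
        table.time_partitioning = bigquery.TimePartitioning()
    table.time_partitioning.expiration_ms = None  # unset the expiration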
tests/unit/test_table.py::TestTableListItem::test_ctor
/tmpfs/src/github/python-bigquery/tests/unit/test_table.py:1553: DeprecationWarning: datetime.datetime.utcfromtimestamp() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.fromtimestamp(timestamp, datetime.UTC).
self.WHEN = datetime.datetime.utcfromtimestamp(self.WHEN_TS).replace(tzinfo=UTC)

tests/unit/test_table.py::TestRowIterator::test_to_arrow_ensure_bqstorage_client_wo_bqstorage
/tmpfs/src/github/python-bigquery/google/cloud/bigquery/table.py:1733: UserWarning: no bqstorage
warnings.warn(str(exc))

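The "no bqstorage" UserWarning is a fallback path warning and then continuing without the storage API. Roughly this shape (the helper name is hypothetical; see table.py:1733 for the real path):

    import warnings

    def bqstorage_available():  # hypothetical helper
        try:
            import google.cloud.bigquery_storage  # the optional "bqstorage" extra
            return True
        except ImportError as exc:
            warnings.warn(str(exc))  # surfaces as the UserWarning captured above
            return False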
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
FAILED tests/unit/job/test_base.py::Test_AsyncJob::test_result_default_wo_state
FAILED tests/unit/job/test_base.py::Test_AsyncJob::test_result_w_retry_wo_state
FAILED tests/unit/job/test_query.py::TestQueryJob::test_result_begin_job_if_not_exist
FAILED tests/unit/job/test_query.py::TestQueryJob::test_result_reloads_job_state_until_done
FAILED tests/unit/job/test_query.py::TestQueryJob::test_result_w_custom_retry
FAILED tests/unit/job/test_query.py::TestQueryJob::test_result_w_page_size - ...
FAILED tests/unit/job/test_query.py::TestQueryJob::test_result_w_timeout_doesnt_raise
FAILED tests/unit/job/test_query.py::TestQueryJob::test_result_w_timeout_raises_concurrent_futures_timeout
FAILED tests/unit/job/test_query.py::TestQueryJob::test_result_with_done_job_calls_get_query_results
FAILED tests/unit/job/test_query.py::TestQueryJob::test_result_with_max_results
FAILED tests/unit/test_client.py::TestClient::test_create_routine_w_conflict_exists_ok
FAILED tests/unit/test_client.py::TestClient::test_create_table_alreadyexists_w_exists_ok_true
FAILED tests/unit/test_client.py::TestClient::test_list_rows_w_start_index_w_page_size
FAILED tests/unit/test_client.py::TestClient::test_query_and_wait_w_page_size_multiple_requests
FAILED tests/unit/test_create_dataset.py::test_create_dataset_alreadyexists_w_exists_ok_true
FAILED tests/unit/test_job_retry.py::test_retry_connection_error_with_default_retries_and_successful_first_job
FAILED tests/unit/test_job_retry.py::test_query_retry_with_default_retry_and_ambiguous_errors_only_retries_with_failed_job
FAILED tests/unit/test_magics.py::test_context_with_default_connection - Asse...
FAILED tests/unit/test_magics.py::test_context_with_custom_connection - Asser...
19 failed, 2366 passed, 24 skipped, 453 warnings in 93.89s (0:01:33)
nox > Command py.test --quiet '-W default::PendingDeprecationWarning' --cov=google/cloud/bigquery --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit failed with exit code 1
nox > Session unit-3.12 failed.
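For reference, the -W default::PendingDeprecationWarning flag in that py.test command is equivalent to this stdlib filter (a sketch):

    import warnings

    # "default" action: print each PendingDeprecationWarning once per location,
    # rather than ignoring the category as the interpreter normally would.
    warnings.filterwarnings("default", category=PendingDeprecationWarning)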
nox > Running session cover
nox > Creating virtual environment (virtualenv) using python3.8 in .nox/cover
nox > python -m pip install coverage pytest-cov
nox > coverage report --show-missing --fail-under=100
Name Stmts Miss Branch BrPart Cover Missing
--------------------------------------------------------------------------------------------------------
google/cloud/bigquery/_helpers.py 369 0 184 0 100%
google/cloud/bigquery/_http.py 15 0 0 0 100%
google/cloud/bigquery/_job_helpers.py 146 0 38 0 100%
google/cloud/bigquery/_pandas_helpers.py 397 0 213 0 100%
google/cloud/bigquery/_pyarrow_helpers.py 28 0 4 1 97% 102->109
google/cloud/bigquery/_tqdm_helpers.py 57 0 20 0 100%
google/cloud/bigquery/_versions_helpers.py 93 0 24 0 100%
google/cloud/bigquery/client.py 884 0 334 0 100%
google/cloud/bigquery/dataset.py 382 0 95 0 100%
google/cloud/bigquery/dbapi/__init__.py 29 0 0 0 100%
google/cloud/bigquery/dbapi/_helpers.py 170 0 96 0 100%
google/cloud/bigquery/dbapi/connection.py 39 0 16 0 100%
google/cloud/bigquery/dbapi/cursor.py 179 0 64 0 100%
google/cloud/bigquery/dbapi/exceptions.py 10 0 0 0 100%
google/cloud/bigquery/dbapi/types.py 28 0 4 0 100%
google/cloud/bigquery/encryption_configuration.py 27 0 4 0 100%
google/cloud/bigquery/enums.py 123 0 0 0 100%
google/cloud/bigquery/exceptions.py 5 0 0 0 100%
google/cloud/bigquery/external_config.py 425 0 58 0 100%
google/cloud/bigquery/format_options.py 50 0 0 0 100%
google/cloud/bigquery/iam.py 5 0 0 0 100%
google/cloud/bigquery/job/__init__.py 34 0 0 0 100%
google/cloud/bigquery/job/base.py 344 0 74 0 100%
google/cloud/bigquery/job/copy_.py 105 0 20 0 100%
google/cloud/bigquery/job/extract.py 99 0 14 0 100%
google/cloud/bigquery/job/load.py 395 0 60 0 100%
google/cloud/bigquery/job/query.py 796 0 166 0 100%
google/cloud/bigquery/magics/__init__.py 2 0 0 0 100%
google/cloud/bigquery/magics/line_arg_parser/__init__.py 7 0 0 0 100%
google/cloud/bigquery/magics/line_arg_parser/exceptions.py 5 0 0 0 100%
google/cloud/bigquery/magics/line_arg_parser/lexer.py 29 0 4 0 100%
google/cloud/bigquery/magics/line_arg_parser/parser.py 214 0 68 0 100%
google/cloud/bigquery/magics/line_arg_parser/visitors.py 84 0 22 0 100%
google/cloud/bigquery/magics/magics.py 273 0 101 0 100%
google/cloud/bigquery/model.py 196 0 34 0 100%
google/cloud/bigquery/opentelemetry_tracing.py 65 0 30 0 100%
google/cloud/bigquery/query.py 453 0 126 0 100%
google/cloud/bigquery/retry.py 30 0 8 0 100%
google/cloud/bigquery/routine/__init__.py 7 0 0 0 100%
google/cloud/bigquery/routine/routine.py 285 0 54 0 100%
google/cloud/bigquery/schema.py 180 0 74 0 100%
google/cloud/bigquery/standard_sql.py 182 0 58 0 100%
google/cloud/bigquery/table.py 1004 0 338 0 100%
google/cloud/bigquery/version.py 1 0 0 0 100%
tests/unit/__init__.py 0 0 0 0 100%
tests/unit/_helpers/__init__.py 0 0 0 0 100%
tests/unit/_helpers/test_from_json.py 22 0 0 0 100%
tests/unit/conftest.py 27 0 4 0 100%
tests/unit/helpers.py 38 0 0 0 100%
tests/unit/job/__init__.py 0 0 0 0 100%
tests/unit/job/helpers.py 84 0 20 0 100%
tests/unit/job/test_base.py 750 0 2 0 100%
tests/unit/job/test_copy.py 260 0 10 0 100%
tests/unit/job/test_extract.py 242 0 10 0 100%
tests/unit/job/test_load.py 508 0 36 0 100%
tests/unit/job/test_load_config.py 564 0 0 0 100%
tests/unit/job/test_query.py 1103 1 54 0 99% 1166
tests/unit/job/test_query_config.py 189 0 0 0 100%
tests/unit/job/test_query_pandas.py 445 0 18 0 100%
tests/unit/job/test_query_stats.py 327 0 0 0 100%
tests/unit/line_arg_parser/__init__.py 0 0 0 0 100%
tests/unit/line_arg_parser/test_lexer.py 12 0 0 0 100%
tests/unit/line_arg_parser/test_parser.py 101 0 0 0 100%
tests/unit/line_arg_parser/test_visitors.py 13 0 0 0 100%
tests/unit/model/__init__.py 0 0 0 0 100%
tests/unit/model/test_model.py 178 0 0 0 100%
tests/unit/model/test_model_reference.py 72 0 0 0 100%
tests/unit/routine/__init__.py 0 0 0 0 100%
tests/unit/routine/test_remote_function_options.py 59 0 0 0 100%
tests/unit/routine/test_routine.py 193 0 0 0 100%
tests/unit/routine/test_routine_argument.py 47 0 0 0 100%
tests/unit/routine/test_routine_reference.py 72 0 0 0 100%
tests/unit/test__helpers.py 971 0 16 0 100%
tests/unit/test__http.py 95 0 2 0 100%
tests/unit/test__job_helpers.py 253 0 10 0 100%
tests/unit/test__pandas_helpers.py 673 2 54 0 99% 27-28
tests/unit/test__pyarrow_helpers.py 12 0 0 0 100%
tests/unit/test__versions_helpers.py 146 0 0 0 100%
tests/unit/test_client.py 4104 0 108 0 100%
tests/unit/test_create_dataset.py 232 0 0 0 100%
tests/unit/test_dataset.py 777 0 22 0 100%
tests/unit/test_dbapi__helpers.py 227 0 16 0 100%
tests/unit/test_dbapi_connection.py 154 0 6 0 100%
tests/unit/test_dbapi_cursor.py 452 0 18 0 100%
tests/unit/test_dbapi_types.py 26 0 0 0 100%
tests/unit/test_delete_dataset.py 36 0 0 0 100%
tests/unit/test_encryption_configuration.py 75 0 0 0 100%
tests/unit/test_external_config.py 450 0 0 0 100%
tests/unit/test_format_options.py 37 0 0 0 100%
tests/unit/test_job_retry.py 143 0 10 0 100%
tests/unit/test_legacy_types.py 15 2 0 0 87% 22-23
tests/unit/test_list_datasets.py 47 0 2 0 100%
tests/unit/test_list_jobs.py 100 0 6 0 100%
tests/unit/test_list_models.py 39 0 2 0 100%
tests/unit/test_list_projects.py 47 0 2 0 100%
tests/unit/test_list_routines.py 39 0 2 0 100%
tests/unit/test_list_tables.py 76 0 4 0 100%
tests/unit/test_magics.py 1135 0 14 0 100%
tests/unit/test_opentelemetry_tracing.py 137 0 4 0 100%
tests/unit/test_packaging.py 16 0 0 0 100%
tests/unit/test_query.py 1156 0 4 0 100%
tests/unit/test_retry.py 84 0 0 0 100%
tests/unit/test_schema.py 484 0 8 0 100%
tests/unit/test_signature_compatibility.py 27 0 0 0 100%
tests/unit/test_standard_sql_types.py 281 0 0 0 100%
tests/unit/test_table.py 2948 3 136 2 99% 1996, 3817-3818
tests/unit/test_table_arrow.py 18 0 4 0 100%
tests/unit/test_table_pandas.py 84 0 2 0 100%
--------------------------------------------------------------------------------------------------------
TOTAL 29153 8 3011 3 99%
Coverage failure: total of 99 is less than fail-under=100
nox > Command coverage report --show-missing --fail-under=100 failed with exit code 2
nox > Session cover failed.
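Exit code 2 is coverage.py's fail-under signal: the 99% total above fell short of the configured 100%. A rough Python equivalent of the failing gate, assuming coverage.py's documented API:

    import sys

    import coverage

    cov = coverage.Coverage(config_file=".coveragerc")
    cov.load()                        # read the .coverage data pytest-cov appended
    total = cov.report(show_missing=True)
    if total < 100:                   # mirrors --fail-under=100
        sys.exit(2)                   # the exit code nox reported above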
nox > Running session docs
nox > Creating virtual environment (virtualenv) using python3.9 in .nox/docs
nox > python -m pip install sphinxcontrib-applehelp==1.0.4 sphinxcontrib-devhelp==1.0.2 sphinxcontrib-htmlhelp==2.0.1 sphinxcontrib-qthelp==1.0.3 sphinxcontrib-serializinghtml==1.1.5 sphinx==4.5.0 alabaster recommonmark
nox > python -m pip install google-cloud-storage
nox > python -m pip install -e '.[all]'
nox > sphinx-build -W -T -N -b html -d docs/_build/doctrees/ docs/ docs/_build/html/
Running Sphinx v4.5.0
making output directory... done
[autosummary] generating autosummary for: README.rst, UPGRADING.md, bigquery/legacy_proto_types.rst, bigquery/standard_sql.rst, changelog.md, dbapi.rst, design/index.rst, design/query-retries.md, enums.rst, format_options.rst, ..., reference.rst, summary_overview.md, usage/client.rst, usage/datasets.rst, usage/encryption.rst, usage/index.rst, usage/jobs.rst, usage/pandas.rst, usage/queries.rst, usage/tables.rst
loading intersphinx inventory from https://python.readthedocs.org/en/latest/objects.inv...
loading intersphinx inventory from https://googleapis.dev/python/google-auth/latest/objects.inv...
loading intersphinx inventory from https://googleapis.dev/python/google-api-core/latest/objects.inv...
loading intersphinx inventory from https://grpc.github.io/grpc/python/objects.inv...
loading intersphinx inventory from https://proto-plus-python.readthedocs.io/en/latest/objects.inv...
loading intersphinx inventory from https://googleapis.dev/python/protobuf/latest/objects.inv...
loading intersphinx inventory from https://dateutil.readthedocs.io/en/latest/objects.inv...
loading intersphinx inventory from https://geopandas.org/objects.inv...
loading intersphinx inventory from https://pandas.pydata.org/pandas-docs/stable/objects.inv...
intersphinx inventory has moved: https://python.readthedocs.org/en/latest/objects.inv -> https://python.readthedocs.io/en/latest/objects.inv
intersphinx inventory has moved: https://geopandas.org/objects.inv -> https://geopandas.org/en/stable/objects.inv
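The two "inventory has moved" notices are harmless redirects; silencing them means pointing conf.py's intersphinx_mapping at the new URLs (a sketch; only the moved entries shown):

    # docs/conf.py
    intersphinx_mapping = {
        "python": ("https://python.readthedocs.io/en/latest/", None),
        "geopandas": ("https://geopandas.org/en/stable/", None),
        # ...remaining inventories unchanged...
    }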
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 24 source files that are out of date
updating environment: [new config] 24 added, 0 changed, 0 removed
reading sources... [ 4%] README
reading sources... [ 8%] UPGRADING
/tmpfs/src/github/python-bigquery/.nox/docs/lib/python3.9/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
reading sources... [ 12%] bigquery/legacy_proto_types
reading sources... [ 16%] bigquery/standard_sql
reading sources... [ 20%] changelog
/tmpfs/src/github/python-bigquery/.nox/docs/lib/python3.9/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
reading sources... [ 25%] dbapi
reading sources... [ 29%] design/index
reading sources... [ 33%] design/query-retries
/tmpfs/src/github/python-bigquery/.nox/docs/lib/python3.9/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
reading sources... [ 37%] enums
reading sources... [ 41%] format_options
reading sources... [ 45%] index
reading sources... [ 50%] job_base
reading sources... [ 54%] magics
reading sources... [ 58%] query
reading sources... [ 62%] reference
reading sources... [ 66%] summary_overview
/tmpfs/src/github/python-bigquery/.nox/docs/lib/python3.9/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
reading sources... [ 70%] usage/client
reading sources... [ 75%] usage/datasets
reading sources... [ 79%] usage/encryption
reading sources... [ 83%] usage/index
reading sources... [ 87%] usage/jobs
reading sources... [ 91%] usage/pandas
reading sources... [ 95%] usage/queries
reading sources... [100%] usage/tables

looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
writing output... [ 4%] README
writing output... [ 8%] UPGRADING
writing output... [ 12%] bigquery/legacy_proto_types
writing output... [ 16%] bigquery/standard_sql
writing output... [ 20%] changelog
writing output... [ 25%] dbapi
writing output... [ 29%] design/index
writing output... [ 33%] design/query-retries
writing output... [ 37%] enums
writing output... [ 41%] format_options
writing output... [ 45%] index
writing output... [ 50%] job_base
writing output... [ 54%] magics
writing output... [ 58%] query
writing output... [ 62%] reference
writing output... [ 66%] summary_overview
writing output... [ 70%] usage/client
writing output... [ 75%] usage/datasets
writing output... [ 79%] usage/encryption
writing output... [ 83%] usage/index
writing output... [ 87%] usage/jobs
writing output... [ 91%] usage/pandas
writing output... [ 95%] usage/queries
writing output... [100%] usage/tables

generating indices... genindex py-modindex done
highlighting module code... [ 3%] google.cloud.bigquery.client
highlighting module code... [ 7%] google.cloud.bigquery.dataset
highlighting module code... [ 10%] google.cloud.bigquery.dbapi.connection
highlighting module code... [ 14%] google.cloud.bigquery.dbapi.cursor
highlighting module code... [ 17%] google.cloud.bigquery.dbapi.exceptions
highlighting module code... [ 21%] google.cloud.bigquery.dbapi.types
highlighting module code... [ 25%] google.cloud.bigquery.encryption_configuration
highlighting module code... [ 28%] google.cloud.bigquery.enums
highlighting module code... [ 32%] google.cloud.bigquery.external_config
highlighting module code... [ 35%] google.cloud.bigquery.format_options
highlighting module code... [ 39%] google.cloud.bigquery.job.base
highlighting module code... [ 42%] google.cloud.bigquery.job.copy_
highlighting module code... [ 46%] google.cloud.bigquery.job.extract
highlighting module code... [ 50%] google.cloud.bigquery.job.load
highlighting module code... [ 53%] google.cloud.bigquery.job.query
highlighting module code... [ 57%] google.cloud.bigquery.magics.magics
highlighting module code... [ 60%] google.cloud.bigquery.model
highlighting module code... [ 64%] google.cloud.bigquery.query
highlighting module code... [ 67%] google.cloud.bigquery.routine.routine
highlighting module code... [ 71%] google.cloud.bigquery.schema
highlighting module code... [ 75%] google.cloud.bigquery.standard_sql
highlighting module code... [ 78%] google.cloud.bigquery.table
highlighting module code... [ 82%] google.cloud.bigquery_v2.types.encryption_config
highlighting module code... [ 85%] google.cloud.bigquery_v2.types.model
highlighting module code... [ 89%] google.cloud.bigquery_v2.types.model_reference
highlighting module code... [ 92%] google.cloud.bigquery_v2.types.standard_sql
highlighting module code... [ 96%] google.cloud.bigquery_v2.types.table_reference
highlighting module code... [100%] google.cloud.client

writing additional pages... search done
copying static files... done
copying extra files... done
dumping search index in English (code: en)... done
dumping object inventory... done
build succeeded.

The HTML pages are in docs/_build/html.
Session ran in 88 seconds (0:01:28)
nox > Session docs was successful.
nox > Running session docfx
nox > Creating virtual environment (virtualenv) using python3.10 in .nox/docfx
nox > python -m pip install -e .
nox > python -m pip install sphinxcontrib-applehelp==1.0.4 sphinxcontrib-devhelp==1.0.2 sphinxcontrib-htmlhelp==2.0.1 sphinxcontrib-qthelp==1.0.3 sphinxcontrib-serializinghtml==1.1.5 gcp-sphinx-docfx-yaml alabaster recommonmark
nox > sphinx-build -T -N -D extensions=sphinx.ext.autodoc,sphinx.ext.autosummary,docfx_yaml.extension,sphinx.ext.intersphinx,sphinx.ext.coverage,sphinx.ext.napoleon,sphinx.ext.todo,sphinx.ext.viewcode,recommonmark -b html -d docs/_build/doctrees/ docs/ docs/_build/html/
Running Sphinx v4.5.0
WARNING: while setting up extension sphinx.ext.todo: directive 'todo' is already registered, it will be overridden
making output directory... done
[autosummary] generating autosummary for: README.rst, UPGRADING.md, bigquery/legacy_proto_types.rst, bigquery/standard_sql.rst, changelog.md, dbapi.rst, design/index.rst, design/query-retries.md, enums.rst, format_options.rst, ..., reference.rst, summary_overview.md, usage/client.rst, usage/datasets.rst, usage/encryption.rst, usage/index.rst, usage/jobs.rst, usage/pandas.rst, usage/queries.rst, usage/tables.rst
Failed to import google.cloud.bigquery.magics.magics.
Possible hints:
* ImportError: This module can only be loaded in IPython.
* AttributeError: module 'google.cloud.bigquery' has no attribute 'magics'
Retrieving repository metadata.
Successfully retrieved repository metadata.
Running sphinx-build with Markdown first...
Running Sphinx v4.5.0
making output directory... done
[autosummary] generating autosummary for: README.rst, UPGRADING.md, bigquery/legacy_proto_types.rst, bigquery/standard_sql.rst, changelog.md, dbapi.rst, design/index.rst, design/query-retries.md, enums.rst, format_options.rst, ..., reference.rst, summary_overview.md, usage/client.rst, usage/datasets.rst, usage/encryption.rst, usage/index.rst, usage/jobs.rst, usage/pandas.rst, usage/queries.rst, usage/tables.rst
Failed to import google.cloud.bigquery.magics.magics.
Possible hints:
* AttributeError: module 'google.cloud.bigquery' has no attribute 'magics'
* ImportError: This module can only be loaded in IPython.
loading intersphinx inventory from https://python.readthedocs.org/en/latest/objects.inv...
loading intersphinx inventory from https://googleapis.dev/python/google-auth/latest/objects.inv...
loading intersphinx inventory from https://googleapis.dev/python/google-api-core/latest/objects.inv...
loading intersphinx inventory from https://grpc.github.io/grpc/python/objects.inv...
loading intersphinx inventory from https://proto-plus-python.readthedocs.io/en/latest/objects.inv...
loading intersphinx inventory from https://googleapis.dev/python/protobuf/latest/objects.inv...
loading intersphinx inventory from https://dateutil.readthedocs.io/en/latest/objects.inv...
loading intersphinx inventory from https://geopandas.org/objects.inv...
loading intersphinx inventory from https://pandas.pydata.org/pandas-docs/stable/objects.inv...
intersphinx inventory has moved: https://geopandas.org/objects.inv -> https://geopandas.org/en/stable/objects.inv
intersphinx inventory has moved: https://python.readthedocs.org/en/latest/objects.inv -> https://python.readthedocs.io/en/latest/objects.inv
building [mo]: targets for 0 po files that are out of date
building [markdown]: targets for 24 source files that are out of date
updating environment: [new config] 24 added, 0 changed, 0 removed
reading sources... [ 4%] README
reading sources... [ 8%] UPGRADING
/tmpfs/src/github/python-bigquery/.nox/docfx/lib/python3.10/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
reading sources... [ 12%] bigquery/legacy_proto_types
reading sources... [ 16%] bigquery/standard_sql
reading sources... [ 20%] changelog
/tmpfs/src/github/python-bigquery/.nox/docfx/lib/python3.10/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
reading sources... [ 25%] dbapi
reading sources... [ 29%] design/index
reading sources... [ 33%] design/query-retries
/tmpfs/src/github/python-bigquery/.nox/docfx/lib/python3.10/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
reading sources... [ 37%] enums
reading sources... [ 41%] format_options
reading sources... [ 45%] index
reading sources... [ 50%] job_base
reading sources... [ 54%] magics
reading sources... [ 58%] query
reading sources... [ 62%] reference
reading sources... [ 66%] summary_overview
/tmpfs/src/github/python-bigquery/.nox/docfx/lib/python3.10/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
reading sources... [ 70%] usage/client
reading sources... [ 75%] usage/datasets
reading sources... [ 79%] usage/encryption
reading sources... [ 83%] usage/index
reading sources... [ 87%] usage/jobs
reading sources... [ 91%] usage/pandas
reading sources... [ 95%] usage/queries
reading sources... [100%] usage/tables

WARNING: autodoc: failed to import module 'magics.magics' from module 'google.cloud.bigquery'; the following exception was raised:
This module can only be loaded in IPython.
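That ImportError is the magics module guarding against non-interactive imports; the guard is roughly this shape (illustrative, not the file's exact code):

    try:
        import IPython
    except ImportError:
        IPython = None

    if IPython is None:
        raise ImportError("This module can only be loaded in IPython.")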
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
writing output... [ 4%] README
writing output... [ 8%] UPGRADING
writing output... [ 12%] bigquery/legacy_proto_types
writing output... [ 16%] bigquery/standard_sql
writing output... [ 20%] changelog
writing output... [ 25%] dbapi
writing output... [ 29%] design/index
writing output... [ 33%] design/query-retries
writing output... [ 37%] enums
writing output... [ 41%] format_options
writing output... [ 45%] index
writing output... [ 50%] job_base
writing output... [ 54%] magics
writing output... [ 58%] query
writing output... [ 62%] reference
writing output... [ 66%] summary_overview
writing output... [ 70%] usage/client
writing output... [ 75%] usage/datasets
writing output... [ 79%] usage/encryption
writing output... [ 83%] usage/index
writing output... [ 87%] usage/jobs
writing output... [ 91%] usage/pandas
writing output... [ 95%] usage/queries
writing output... [100%] usage/tables

/tmpfs/src/github/python-bigquery/docs/bigquery/standard_sql.rst:: WARNING: The desc_returns element not yet supported in Markdown.
/tmpfs/src/github/python-bigquery/docs/format_options.rst:: WARNING: The desc_returns element not yet supported in Markdown.
/tmpfs/src/github/python-bigquery/docs/job_base.rst:: WARNING: The desc_returns element not yet supported in Markdown.
/tmpfs/src/github/python-bigquery/docs/query.rst:: WARNING: The desc_returns element not yet supported in Markdown.
/tmpfs/src/github/python-bigquery/docs/reference.rst:: WARNING: The desc_returns element not yet supported in Markdown.
build succeeded, 6 warnings.

The markdown files are in docs/_build/markdown.
Completed running sphinx-build with Markdown files.
loading intersphinx inventory from https://python.readthedocs.org/en/latest/objects.inv...
loading intersphinx inventory from https://googleapis.dev/python/google-auth/latest/objects.inv...
loading intersphinx inventory from https://googleapis.dev/python/google-api-core/latest/objects.inv...
loading intersphinx inventory from https://grpc.github.io/grpc/python/objects.inv...
loading intersphinx inventory from https://proto-plus-python.readthedocs.io/en/latest/objects.inv...
loading intersphinx inventory from https://googleapis.dev/python/protobuf/latest/objects.inv...
loading intersphinx inventory from https://dateutil.readthedocs.io/en/latest/objects.inv...
loading intersphinx inventory from https://geopandas.org/objects.inv...
loading intersphinx inventory from https://pandas.pydata.org/pandas-docs/stable/objects.inv...
intersphinx inventory has moved: https://geopandas.org/objects.inv -> https://geopandas.org/en/stable/objects.inv
intersphinx inventory has moved: https://python.readthedocs.org/en/latest/objects.inv -> https://python.readthedocs.io/en/latest/objects.inv
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 24 source files that are out of date
updating environment: [new config] 24 added, 0 changed, 0 removed
reading sources... [ 4%] README
reading sources... [ 8%] UPGRADING
/tmpfs/src/github/python-bigquery/.nox/docfx/lib/python3.10/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
reading sources... [ 12%] bigquery/legacy_proto_types
Can't get argspec for : google.cloud.bigquery_v2.types.DeleteModelRequest.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.DeleteModelRequest.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.DeleteModelRequest.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.DeleteModelRequest.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.EncryptionConfiguration.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.EncryptionConfiguration.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.EncryptionConfiguration.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.EncryptionConfiguration.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.GetModelRequest.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.GetModelRequest.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.GetModelRequest.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.GetModelRequest.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.ListModelsRequest.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.ListModelsRequest.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.ListModelsRequest.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.ListModelsRequest.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.ListModelsResponse.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.ListModelsResponse.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.ListModelsResponse.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.ListModelsResponse.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.AggregateClassificationMetrics.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.AggregateClassificationMetrics.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.AggregateClassificationMetrics.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.AggregateClassificationMetrics.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ArimaFittingMetrics.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ArimaFittingMetrics.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ArimaFittingMetrics.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ArimaFittingMetrics.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ArimaForecastingMetrics.ArimaSingleModelForecastingMetrics.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ArimaForecastingMetrics.ArimaSingleModelForecastingMetrics.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ArimaForecastingMetrics.ArimaSingleModelForecastingMetrics.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ArimaForecastingMetrics.ArimaSingleModelForecastingMetrics.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ArimaForecastingMetrics.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ArimaForecastingMetrics.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ArimaForecastingMetrics.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ArimaForecastingMetrics.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ArimaOrder.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ArimaOrder.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ArimaOrder.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ArimaOrder.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.BinaryClassificationMetrics.BinaryConfusionMatrix.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.BinaryClassificationMetrics.BinaryConfusionMatrix.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.BinaryClassificationMetrics.BinaryConfusionMatrix.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.BinaryClassificationMetrics.BinaryConfusionMatrix.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.BinaryClassificationMetrics.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.BinaryClassificationMetrics.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.BinaryClassificationMetrics.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.BinaryClassificationMetrics.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ClusteringMetrics.Cluster.FeatureValue.CategoricalValue.CategoryCount.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ClusteringMetrics.Cluster.FeatureValue.CategoricalValue.CategoryCount.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ClusteringMetrics.Cluster.FeatureValue.CategoricalValue.CategoryCount.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ClusteringMetrics.Cluster.FeatureValue.CategoricalValue.CategoryCount.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ClusteringMetrics.Cluster.FeatureValue.CategoricalValue.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ClusteringMetrics.Cluster.FeatureValue.CategoricalValue.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ClusteringMetrics.Cluster.FeatureValue.CategoricalValue.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ClusteringMetrics.Cluster.FeatureValue.CategoricalValue.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ClusteringMetrics.Cluster.FeatureValue.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ClusteringMetrics.Cluster.FeatureValue.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ClusteringMetrics.Cluster.FeatureValue.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ClusteringMetrics.Cluster.FeatureValue.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ClusteringMetrics.Cluster.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ClusteringMetrics.Cluster.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ClusteringMetrics.Cluster.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ClusteringMetrics.Cluster.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ClusteringMetrics.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ClusteringMetrics.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ClusteringMetrics.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.ClusteringMetrics.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.DataSplitResult.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.DataSplitResult.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.DataSplitResult.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.DataSplitResult.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.EvaluationMetrics.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.EvaluationMetrics.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.EvaluationMetrics.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.EvaluationMetrics.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.GlobalExplanation.Explanation.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.GlobalExplanation.Explanation.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.GlobalExplanation.Explanation.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.GlobalExplanation.Explanation.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.GlobalExplanation.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.GlobalExplanation.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.GlobalExplanation.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.GlobalExplanation.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.KmeansEnums.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.KmeansEnums.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.KmeansEnums.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.KmeansEnums.__setattr__. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery_v2.types.Model.LabelsEntry
Can't get argspec for : google.cloud.bigquery_v2.types.Model.LabelsEntry.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.LabelsEntry.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.LabelsEntry.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.LabelsEntry.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.MultiClassClassificationMetrics.ConfusionMatrix.Entry.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.MultiClassClassificationMetrics.ConfusionMatrix.Entry.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.MultiClassClassificationMetrics.ConfusionMatrix.Entry.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.MultiClassClassificationMetrics.ConfusionMatrix.Entry.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.MultiClassClassificationMetrics.ConfusionMatrix.Row.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.MultiClassClassificationMetrics.ConfusionMatrix.Row.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.MultiClassClassificationMetrics.ConfusionMatrix.Row.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.MultiClassClassificationMetrics.ConfusionMatrix.Row.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.MultiClassClassificationMetrics.ConfusionMatrix.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.MultiClassClassificationMetrics.ConfusionMatrix.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.MultiClassClassificationMetrics.ConfusionMatrix.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.MultiClassClassificationMetrics.ConfusionMatrix.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.MultiClassClassificationMetrics.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.MultiClassClassificationMetrics.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.MultiClassClassificationMetrics.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.MultiClassClassificationMetrics.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.RankingMetrics.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.RankingMetrics.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.RankingMetrics.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.RankingMetrics.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.RegressionMetrics.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.RegressionMetrics.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.RegressionMetrics.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.RegressionMetrics.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.SeasonalPeriod.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.SeasonalPeriod.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.SeasonalPeriod.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.SeasonalPeriod.__setattr__. 'NoneType' object is not iterable
Could not process the attribute. Please check the docstring:
Information about a single iteration of the training run.

.. attribute:: index
   Index of the iteration, 0 based.
   :type: google.protobuf.wrappers_pb2.Int32Value

.. attribute:: duration_ms
   Time taken to run the iteration in milliseconds.
   :type: google.protobuf.wrappers_pb2.Int64Value

.. attribute:: training_loss
   Loss computed on the training data at the end of iteration.
   :type: google.protobuf.wrappers_pb2.DoubleValue

.. attribute:: eval_loss
   Loss computed on the eval data at the end of iteration.
   :type: google.protobuf.wrappers_pb2.DoubleValue

.. attribute:: learn_rate
   Learn rate used for this iteration.
   :type: float

.. attribute:: cluster_infos
   Information about top clusters for clustering models.
   :type: Sequence[google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.ClusterInfo]

.. attribute:: arima_result
   :type: google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.ArimaResult

Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.ArimaResult.ArimaCoefficients.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.ArimaResult.ArimaCoefficients.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.ArimaResult.ArimaCoefficients.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.ArimaResult.ArimaCoefficients.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.ArimaResult.ArimaModelInfo.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.ArimaResult.ArimaModelInfo.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.ArimaResult.ArimaModelInfo.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.ArimaResult.ArimaModelInfo.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.ArimaResult.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.ArimaResult.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.ArimaResult.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.ArimaResult.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.ClusterInfo.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.ClusterInfo.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.ClusterInfo.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.ClusterInfo.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.__setattr__. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery_v2.types.Model.TrainingRun.TrainingOptions.LabelClassWeightsEntry
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.TrainingOptions.LabelClassWeightsEntry.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.TrainingOptions.LabelClassWeightsEntry.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.TrainingOptions.LabelClassWeightsEntry.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.TrainingOptions.LabelClassWeightsEntry.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.TrainingOptions.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.TrainingOptions.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.TrainingOptions.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.TrainingOptions.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.TrainingRun.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.Model.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.ModelReference.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.ModelReference.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.ModelReference.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.ModelReference.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.PatchModelRequest.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.PatchModelRequest.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.PatchModelRequest.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.PatchModelRequest.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.StandardSqlDataType.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.StandardSqlDataType.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.StandardSqlDataType.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.StandardSqlDataType.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.StandardSqlField.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.StandardSqlField.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.StandardSqlField.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.StandardSqlField.__setattr__. 'NoneType' object is not iterable
Could not process the attribute. Please check the docstring:
.. attribute:: fields
   :type: Sequence[google.cloud.bigquery_v2.types.StandardSqlField]

Can't get argspec for : google.cloud.bigquery_v2.types.StandardSqlStructType.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.StandardSqlStructType.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.StandardSqlStructType.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.StandardSqlStructType.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.StandardSqlTableType.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.StandardSqlTableType.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.StandardSqlTableType.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.StandardSqlTableType.__setattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.TableReference.__delattr__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.TableReference.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.TableReference.__ne__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery_v2.types.TableReference.__setattr__. 'NoneType' object is not iterable
reading sources... [ 16%] bigquery/standard_sql
Skip inspecting for property: google.cloud.bigquery.standard_sql.StandardSqlDataType.array_element_type
Can't get argspec for : google.cloud.bigquery.standard_sql.StandardSqlDataType.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.standard_sql.StandardSqlDataType.range_element_type
Skip inspecting for property: google.cloud.bigquery.standard_sql.StandardSqlDataType.struct_type
Can't get argspec for : google.cloud.bigquery.standard_sql.StandardSqlDataType.to_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.standard_sql.StandardSqlDataType.type_kind
Can't get argspec for : google.cloud.bigquery.standard_sql.StandardSqlField.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.standard_sql.StandardSqlField.name
Can't get argspec for : google.cloud.bigquery.standard_sql.StandardSqlField.to_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.standard_sql.StandardSqlField.type
Skip inspecting for property: google.cloud.bigquery.standard_sql.StandardSqlStructType.fields
Can't get argspec for : google.cloud.bigquery.standard_sql.StandardSqlStructType.from_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.standard_sql.StandardSqlStructType.to_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.standard_sql.StandardSqlTableType. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.standard_sql.StandardSqlTableType.columns
Can't get argspec for : google.cloud.bigquery.standard_sql.StandardSqlTableType.from_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.standard_sql.StandardSqlTableType.to_api_repr. 'NoneType' object is not iterable
reading sources... [ 20%] changelog
/tmpfs/src/github/python-bigquery/.nox/docfx/lib/python3.10/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
reading sources... [ 25%] dbapi
Can't get argspec for : google.cloud.bigquery.dbapi.Binary. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.dbapi.Connection.close. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.dbapi.Connection.commit. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.dbapi.Connection.cursor. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.dbapi.Cursor. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.dbapi.Cursor.close. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.dbapi.Cursor.execute. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.dbapi.Cursor.executemany. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.dbapi.Cursor.fetchall. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.dbapi.Cursor.fetchmany. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.dbapi.Cursor.fetchone. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.dbapi.Cursor.query_job
Can't get argspec for : google.cloud.bigquery.dbapi.Cursor.setinputsizes. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.dbapi.Cursor.setoutputsize. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.dbapi.DataError. unsupported callable
Can't get argspec for : google.cloud.bigquery.dbapi.DataError.with_traceback. unsupported callable
Can't inspect type : google.cloud.bigquery.dbapi.DataError.with_traceback
Can't get argspec for : google.cloud.bigquery.dbapi.DatabaseError. unsupported callable
Can't get argspec for : google.cloud.bigquery.dbapi.DatabaseError.with_traceback. unsupported callable
Can't inspect type : google.cloud.bigquery.dbapi.DatabaseError.with_traceback
Can't get argspec for : google.cloud.bigquery.dbapi.DateFromTicks. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.dbapi.DateFromTicks
Can't get argspec for : google.cloud.bigquery.dbapi.Error. unsupported callable
Can't get argspec for : google.cloud.bigquery.dbapi.Error.with_traceback. unsupported callable
Can't inspect type : google.cloud.bigquery.dbapi.Error.with_traceback
Can't get argspec for : google.cloud.bigquery.dbapi.IntegrityError. unsupported callable
Can't get argspec for : google.cloud.bigquery.dbapi.IntegrityError.with_traceback. unsupported callable
Can't inspect type : google.cloud.bigquery.dbapi.IntegrityError.with_traceback
Can't get argspec for : google.cloud.bigquery.dbapi.InterfaceError. unsupported callable
Can't get argspec for : google.cloud.bigquery.dbapi.InterfaceError.with_traceback. unsupported callable
Can't inspect type : google.cloud.bigquery.dbapi.InterfaceError.with_traceback
Can't get argspec for : google.cloud.bigquery.dbapi.InternalError. unsupported callable
Can't get argspec for : google.cloud.bigquery.dbapi.InternalError.with_traceback. unsupported callable
Can't inspect type : google.cloud.bigquery.dbapi.InternalError.with_traceback
Can't get argspec for : google.cloud.bigquery.dbapi.NotSupportedError. unsupported callable
Can't get argspec for : google.cloud.bigquery.dbapi.NotSupportedError.with_traceback. unsupported callable
Can't inspect type : google.cloud.bigquery.dbapi.NotSupportedError.with_traceback
Can't get argspec for : google.cloud.bigquery.dbapi.OperationalError. unsupported callable
Can't get argspec for : google.cloud.bigquery.dbapi.OperationalError.with_traceback. unsupported callable
Can't inspect type : google.cloud.bigquery.dbapi.OperationalError.with_traceback
Can't get argspec for : google.cloud.bigquery.dbapi.ProgrammingError. unsupported callable
Can't get argspec for : google.cloud.bigquery.dbapi.ProgrammingError.with_traceback. unsupported callable
Can't inspect type : google.cloud.bigquery.dbapi.ProgrammingError.with_traceback
Can't get argspec for : google.cloud.bigquery.dbapi.TimestampFromTicks. unsupported callable
Can't inspect type : google.cloud.bigquery.dbapi.TimestampFromTicks
Can't get argspec for : google.cloud.bigquery.dbapi.Warning. unsupported callable
Can't get argspec for : google.cloud.bigquery.dbapi.Warning.with_traceback. unsupported callable
Can't inspect type : google.cloud.bigquery.dbapi.Warning.with_traceback
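The "unsupported callable" entries match the TypeError text that inspect.getfullargspec raises when it cannot build a signature. The DB-API exception classes inherit their constructors and with_traceback from C-implemented BaseException, which exposes no text signature, so introspection gives up. A runnable check against the stdlib equivalent:

    import inspect

    # with_traceback is a C method descriptor without __text_signature__,
    # so getfullargspec re-raises the failure as
    # TypeError("unsupported callable ...") -- the wording logged above.
    try:
        inspect.getfullargspec(BaseException.with_traceback)
    except TypeError as exc:
        print(exc)  # unsupported callable <method 'with_traceback' ...>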
reading sources... [ 29%] design/index
reading sources... [ 33%] design/query-retries
/tmpfs/src/github/python-bigquery/.nox/docfx/lib/python3.10/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
reading sources... [ 37%] enums
Can't inspect type : google.cloud.bigquery.enums.Compression.DEFLATE
Can't inspect type : google.cloud.bigquery.enums.Compression.GZIP
Can't inspect type : google.cloud.bigquery.enums.Compression.NONE
Can't inspect type : google.cloud.bigquery.enums.Compression.SNAPPY
Can't inspect type : google.cloud.bigquery.enums.Compression.ZSTD
Can't get argspec for : google.cloud.bigquery.enums.CreateDisposition. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.enums.CreateDisposition.CREATE_IF_NEEDED
Can't inspect type : google.cloud.bigquery.enums.CreateDisposition.CREATE_NEVER
Can't get argspec for : google.cloud.bigquery.enums.DecimalTargetType. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.enums.DecimalTargetType.BIGNUMERIC
Can't inspect type : google.cloud.bigquery.enums.DecimalTargetType.NUMERIC
Can't inspect type : google.cloud.bigquery.enums.DecimalTargetType.STRING
Can't inspect type : google.cloud.bigquery.enums.DefaultPandasDTypes.BOOL_DTYPE
Can't inspect type : google.cloud.bigquery.enums.DefaultPandasDTypes.DATE_DTYPE
Can't inspect type : google.cloud.bigquery.enums.DefaultPandasDTypes.INT_DTYPE
Can't inspect type : google.cloud.bigquery.enums.DefaultPandasDTypes.RANGE_DATETIME_DTYPE
Can't inspect type : google.cloud.bigquery.enums.DefaultPandasDTypes.RANGE_DATE_DTYPE
Can't inspect type : google.cloud.bigquery.enums.DefaultPandasDTypes.RANGE_TIMESTAMP_DTYPE
Can't inspect type : google.cloud.bigquery.enums.DefaultPandasDTypes.TIME_DTYPE
Can't get argspec for : google.cloud.bigquery.enums.DestinationFormat. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.enums.DestinationFormat.AVRO
Can't inspect type : google.cloud.bigquery.enums.DestinationFormat.CSV
Can't inspect type : google.cloud.bigquery.enums.DestinationFormat.NEWLINE_DELIMITED_JSON
Can't inspect type : google.cloud.bigquery.enums.DestinationFormat.PARQUET
Can't get argspec for : google.cloud.bigquery.enums.DeterminismLevel. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.enums.DeterminismLevel.DETERMINISM_LEVEL_UNSPECIFIED
Can't inspect type : google.cloud.bigquery.enums.DeterminismLevel.DETERMINISTIC
Can't inspect type : google.cloud.bigquery.enums.DeterminismLevel.NOT_DETERMINISTIC
Can't get argspec for : google.cloud.bigquery.enums.Encoding. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.enums.Encoding.ISO_8859_1
Can't inspect type : google.cloud.bigquery.enums.Encoding.UTF_8
Can't get argspec for : google.cloud.bigquery.enums.KeyResultStatementKind. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.enums.QueryApiMethod.INSERT
Can't inspect type : google.cloud.bigquery.enums.QueryApiMethod.QUERY
Can't get argspec for : google.cloud.bigquery.enums.QueryPriority. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.enums.QueryPriority.BATCH
Can't inspect type : google.cloud.bigquery.enums.QueryPriority.INTERACTIVE
Can't get argspec for : google.cloud.bigquery.enums.SchemaUpdateOption. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.enums.SchemaUpdateOption.ALLOW_FIELD_ADDITION
Can't inspect type : google.cloud.bigquery.enums.SchemaUpdateOption.ALLOW_FIELD_RELAXATION
Can't get argspec for : google.cloud.bigquery.enums.SourceFormat. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.enums.SourceFormat.AVRO
Can't inspect type : google.cloud.bigquery.enums.SourceFormat.CSV
Can't inspect type : google.cloud.bigquery.enums.SourceFormat.DATASTORE_BACKUP
Can't inspect type : google.cloud.bigquery.enums.SourceFormat.NEWLINE_DELIMITED_JSON
Can't inspect type : google.cloud.bigquery.enums.SourceFormat.ORC
Can't inspect type : google.cloud.bigquery.enums.SourceFormat.PARQUET
Can't get argspec for : google.cloud.bigquery.enums.WriteDisposition. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.enums.WriteDisposition.WRITE_APPEND
Can't inspect type : google.cloud.bigquery.enums.WriteDisposition.WRITE_EMPTY
Can't inspect type : google.cloud.bigquery.enums.WriteDisposition.WRITE_TRUNCATE
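The enum members flagged with "Can't inspect type" are effectively string constants in this library (SourceFormat.CSV compares equal to "CSV"), so there is no callable or class body for the inspector to take apart. One plausible dead end, sketched with a stand-in class:

    import inspect

    class SourceFormat:      # stand-in; the real members are string values
        CSV = "CSV"
        PARQUET = "PARQUET"

    try:
        inspect.getfullargspec(SourceFormat.CSV)
    except TypeError as exc:
        print(exc)           # unsupported callable 'CSV'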
reading sources... [ 41%] format_options
Can't get argspec for : google.cloud.bigquery.format_options.AvroOptions. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.format_options.AvroOptions.from_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.format_options.AvroOptions.to_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.format_options.AvroOptions.use_avro_logical_types
Can't get argspec for : google.cloud.bigquery.format_options.ParquetOptions. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.format_options.ParquetOptions.enable_list_inference
Skip inspecting for property: google.cloud.bigquery.format_options.ParquetOptions.enum_as_string
Can't get argspec for : google.cloud.bigquery.format_options.ParquetOptions.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.format_options.ParquetOptions.map_target_type
Can't get argspec for : google.cloud.bigquery.format_options.ParquetOptions.to_api_repr. 'NoneType' object is not iterable
reading sources... [ 45%] index
reading sources... [ 50%] job_base
Can't get argspec for : google.cloud.bigquery.job.base.ReservationUsage. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.job.base.ReservationUsage

Can't get argspec for : google.cloud.bigquery.job.base.ReservationUsage.count. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.job.base.ReservationUsage.count
Can't inspect type : google.cloud.bigquery.job.base.ReservationUsage.index
Can't inspect type : google.cloud.bigquery.job.base.ReservationUsage.name
Can't inspect type : google.cloud.bigquery.job.base.ReservationUsage.slot_ms
Can't get argspec for : google.cloud.bigquery.job.base.ScriptStackFrame. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.base.ScriptStackFrame.end_column
Skip inspecting for property: google.cloud.bigquery.job.base.ScriptStackFrame.end_line
Skip inspecting for property: google.cloud.bigquery.job.base.ScriptStackFrame.procedure_id
Skip inspecting for property: google.cloud.bigquery.job.base.ScriptStackFrame.start_column
Skip inspecting for property: google.cloud.bigquery.job.base.ScriptStackFrame.start_line
Skip inspecting for property: google.cloud.bigquery.job.base.ScriptStackFrame.text
Can't get argspec for : google.cloud.bigquery.job.base.ScriptStatistics. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.base.ScriptStatistics.evaluation_kind
Skip inspecting for property: google.cloud.bigquery.job.base.ScriptStatistics.stack_frames
Can't get argspec for : google.cloud.bigquery.job.base.SessionInfo. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.base.SessionInfo.session_id
Can't get argspec for : google.cloud.bigquery.job.base.TransactionInfo. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.job.base.TransactionInfo.count. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.job.base.TransactionInfo.count
Can't inspect type : google.cloud.bigquery.job.base.TransactionInfo.index
Can't inspect type : google.cloud.bigquery.job.base.TransactionInfo.transaction_id
Can't get argspec for : google.cloud.bigquery.job.base.UnknownJob. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.job.base.UnknownJob.add_done_callback. 'NoneType' object is not iterable
Could not format the given code:
Cannot parse: 1:90: def cancel(client=None, retry: typing.Optional[google.api_core.retry.retry_unary.Retry] = , timeout: typing.Optional[float] = None) -> bool: pass)
Can't get argspec for : google.cloud.bigquery.job.base.UnknownJob.cancelled. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.base.UnknownJob.configuration
Skip inspecting for property: google.cloud.bigquery.job.base.UnknownJob.created
Could not format the given code:
Cannot parse: 1:58: def done(retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = 128, reload: bool = True) -> bool: pass)
Could not parse argument information for retry.
Skip inspecting for property: google.cloud.bigquery.job.base.UnknownJob.ended
Skip inspecting for property: google.cloud.bigquery.job.base.UnknownJob.error_result
Skip inspecting for property: google.cloud.bigquery.job.base.UnknownJob.errors
Skip inspecting for property: google.cloud.bigquery.job.base.UnknownJob.etag
Could not format the given code:
Cannot parse: 1:22: def exception(timeout=): pass)
Could not format the given code:
Cannot parse: 1:73: def exists(client=None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> bool: pass)
Could not parse argument information for retry.
Can't get argspec for : google.cloud.bigquery.job.base.UnknownJob.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.base.UnknownJob.job_id
Skip inspecting for property: google.cloud.bigquery.job.base.UnknownJob.job_type
Skip inspecting for property: google.cloud.bigquery.job.base.UnknownJob.labels
Skip inspecting for property: google.cloud.bigquery.job.base.UnknownJob.location
Skip inspecting for property: google.cloud.bigquery.job.base.UnknownJob.num_child_jobs
Skip inspecting for property: google.cloud.bigquery.job.base.UnknownJob.parent_job_id
Skip inspecting for property: google.cloud.bigquery.job.base.UnknownJob.path
Skip inspecting for property: google.cloud.bigquery.job.base.UnknownJob.project
Could not format the given code:
Cannot parse: 1:73: def reload(client=None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = 128): pass)
Could not parse argument information for retry.
Skip inspecting for property: google.cloud.bigquery.job.base.UnknownJob.reservation_usage
Could not format the given code:
Cannot parse: 1:77: def result(retry: typing.Optional[google.api_core.retry.retry_unary.Retry] = , timeout: typing.Optional[float] = None) -> google.cloud.bigquery.job.base._AsyncJob: pass)
Can't get argspec for : google.cloud.bigquery.job.base.UnknownJob.running. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.base.UnknownJob.script_statistics
Skip inspecting for property: google.cloud.bigquery.job.base.UnknownJob.self_link
Skip inspecting for property: google.cloud.bigquery.job.base.UnknownJob.session_info
Can't get argspec for : google.cloud.bigquery.job.base.UnknownJob.set_exception. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.job.base.UnknownJob.set_result. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.base.UnknownJob.started
Skip inspecting for property: google.cloud.bigquery.job.base.UnknownJob.state
Can't get argspec for : google.cloud.bigquery.job.base.UnknownJob.to_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.base.UnknownJob.transaction_info
Skip inspecting for property: google.cloud.bigquery.job.base.UnknownJob.user_email
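The "Could not format the given code" / "Cannot parse: line:col: ..." pairs have the exact shape of black's InvalidInput error, which suggests the generator re-renders each signature as source and runs it through a formatter; the rendered text is invalid wherever a default value came out empty (for example "timeout=" and "retry: ... = ,"). A reproduction of the message, assuming black is the formatter in use:

    import black

    snippet = "def exception(timeout=): pass"  # empty default: bad syntax
    try:
        black.format_str(snippet, mode=black.Mode())
    except ValueError as exc:  # black.InvalidInput subclasses ValueError
        print(exc)  # Cannot parse: 1:22: def exception(timeout=): pass

The printed position matches the "Cannot parse: 1:22:" entry logged above for the same snippet.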
reading sources... [ 54%] magics
reading sources... [ 58%] query
Can't get argspec for : google.cloud.bigquery.query.ArrayQueryParameter. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.ArrayQueryParameter.from_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.ArrayQueryParameter.positional. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.ArrayQueryParameter.to_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.ArrayQueryParameterType. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.ArrayQueryParameterType.from_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.ArrayQueryParameterType.to_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.ConnectionProperty.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.query.ConnectionProperty.key
Can't get argspec for : google.cloud.bigquery.query.ConnectionProperty.to_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.query.ConnectionProperty.value
Can't get argspec for : google.cloud.bigquery.query.RangeQueryParameter.from_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.RangeQueryParameter.to_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.RangeQueryParameterType. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.RangeQueryParameterType.from_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.RangeQueryParameterType.to_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.RangeQueryParameterType.with_name. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.ScalarQueryParameter. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.ScalarQueryParameter.from_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.ScalarQueryParameter.positional. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.ScalarQueryParameter.to_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.ScalarQueryParameterType. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.ScalarQueryParameterType.from_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.ScalarQueryParameterType.to_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.ScalarQueryParameterType.with_name. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.SqlParameterScalarTypes. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.StructQueryParameter. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.StructQueryParameter.from_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.StructQueryParameter.positional. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.StructQueryParameter.to_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.StructQueryParameterType. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.StructQueryParameterType.from_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.StructQueryParameterType.to_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.query.UDFResource. 'NoneType' object is not iterable
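Nearly every unparseable signature in this log blanks out the same default: retry parameters whose real default is the module-level sentinel google.cloud.bigquery.retry.DEFAULT_RETRY, a google.api_core Retry instance. An object like that has no source-valid repr, so a rendered signature cannot be re-parsed; this also accounts for the repeated "Could not parse argument information for retry." lines. A minimal sketch of the round trip failing, with a hypothetical stand-in sentinel:

    import ast
    import inspect

    SENTINEL = object()  # stand-in for a default such as DEFAULT_RETRY

    def result(retry=SENTINEL, timeout=None):
        pass

    rendered = f"def result{inspect.signature(result)}: pass"
    print(rendered)  # def result(retry=<object object at 0x...>, ...): pass

    try:
        ast.parse(rendered)  # the sentinel's repr is not valid source
    except SyntaxError as exc:
        print(exc.msg)       # invalid syntax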
reading sources... [ 62%] reference
Can't inspect type : google.cloud.bigquery.client.Client.SCOPE
Can't get argspec for : google.cloud.bigquery.client.Client.__getstate__. 'NoneType' object is not iterable
Could not format the given code:
Cannot parse: 1:154: def cancel_job(job_id: str, project: typing.Optional[str] = None, location: typing.Optional[str] = None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> typing.Union[google.cloud.bigquery.job.load.LoadJob, google.cloud.bigquery.job.copy_.CopyJob, google.cloud.bigquery.job.extract.ExtractJob, google.cloud.bigquery.job.query.QueryJob]: pass)
Can't get argspec for : google.cloud.bigquery.client.Client.close. 'NoneType' object is not iterable
Could not format the given code:
Cannot parse: 1:767: def copy_table(sources: typing.Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, typing.Sequence[typing.Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str]]], destination: typing.Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str], job_id: typing.Optional[str] = None, job_id_prefix: typing.Optional[str] = None, location: typing.Optional[str] = None, project: typing.Optional[str] = None, job_config: typing.Optional[google.cloud.bigquery.job.copy_.CopyJobConfig] = None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> google.cloud.bigquery.job.copy_.CopyJob: pass)
Could not format the given code:
Cannot parse: 1:255: def create_dataset(dataset: typing.Union[str, google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem], exists_ok: bool = False, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> google.cloud.bigquery.dataset.Dataset: pass)
Could not format the given code:
Cannot parse: 1:82: def create_job(job_config: dict, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> typing.Union[google.cloud.bigquery.job.load.LoadJob, google.cloud.bigquery.job.copy_.CopyJob, google.cloud.bigquery.job.extract.ExtractJob, google.cloud.bigquery.job.query.QueryJob]: pass)
Could not format the given code:
Cannot parse: 1:149: def create_routine(routine: google.cloud.bigquery.routine.routine.Routine, exists_ok: bool = False, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> google.cloud.bigquery.routine.routine.Routine: pass)
Could not format the given code:
Cannot parse: 1:239: def create_table(table: typing.Union[str, google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem], exists_ok: bool = False, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> google.cloud.bigquery.table.Table: pass)
Skip inspecting for property: google.cloud.bigquery.client.Client.default_load_job_config
Skip inspecting for property: google.cloud.bigquery.client.Client.default_query_job_config
Could not format the given code:
Cannot parse: 1:261: def delete_dataset(dataset: typing.Union[google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str], delete_contents: bool = False, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None, not_found_ok: bool = False) -> None: pass)
Could not format the given code:
Cannot parse: 1:346: def delete_job_metadata(job_id: typing.Union[str, google.cloud.bigquery.job.load.LoadJob, google.cloud.bigquery.job.copy_.CopyJob, google.cloud.bigquery.job.extract.ExtractJob, google.cloud.bigquery.job.query.QueryJob], project: typing.Optional[str] = None, location: typing.Optional[str] = None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None, not_found_ok: bool = False): pass)
Could not format the given code:
Cannot parse: 1:171: def delete_model(model: typing.Union[google.cloud.bigquery.model.Model, google.cloud.bigquery.model.ModelReference, str], retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None, not_found_ok: bool = False) -> None: pass)
Could not format the given code:
Cannot parse: 1:199: def delete_routine(routine: typing.Union[google.cloud.bigquery.routine.routine.Routine, google.cloud.bigquery.routine.routine.RoutineReference, str], retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None, not_found_ok: bool = False) -> None: pass)
Could not format the given code:
Cannot parse: 1:214: def delete_table(table: typing.Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str], retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None, not_found_ok: bool = False) -> None: pass)
Could not format the given code:
Cannot parse: 1:600: def extract_table(source: typing.Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, google.cloud.bigquery.model.Model, google.cloud.bigquery.model.ModelReference, str], destination_uris: typing.Union[str, typing.Sequence[str]], job_id: typing.Optional[str] = None, job_id_prefix: typing.Optional[str] = None, location: typing.Optional[str] = None, project: typing.Optional[str] = None, job_config: typing.Optional[google.cloud.bigquery.job.extract.ExtractJobConfig] = None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None, source_type: str = 'Table') -> google.cloud.bigquery.job.extract.ExtractJob: pass)
Can't get argspec for : google.cloud.bigquery.client.Client.from_service_account_info. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.client.Client.from_service_account_json. 'NoneType' object is not iterable
Could not format the given code:
Cannot parse: 1:145: def get_dataset(dataset_ref: typing.Union[google.cloud.bigquery.dataset.DatasetReference, str], retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> google.cloud.bigquery.dataset.Dataset: pass)
Could not format the given code:
Cannot parse: 1:251: def get_iam_policy(table: typing.Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str], requested_policy_version: int = 1, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> google.api_core.iam.Policy: pass)
Could not format the given code:
Cannot parse: 1:334: def get_job(job_id: typing.Union[str, google.cloud.bigquery.job.load.LoadJob, google.cloud.bigquery.job.copy_.CopyJob, google.cloud.bigquery.job.extract.ExtractJob, google.cloud.bigquery.job.query.QueryJob], project: typing.Optional[str] = None, location: typing.Optional[str] = None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = 128) -> typing.Union[google.cloud.bigquery.job.load.LoadJob, google.cloud.bigquery.job.copy_.CopyJob, google.cloud.bigquery.job.extract.ExtractJob, google.cloud.bigquery.job.query.QueryJob, google.cloud.bigquery.job.base.UnknownJob]: pass)
Could not format the given code:
Cannot parse: 1:137: def get_model(model_ref: typing.Union[google.cloud.bigquery.model.ModelReference, str], retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> google.cloud.bigquery.model.Model: pass)
Could not format the given code:
Cannot parse: 1:200: def get_routine(routine_ref: typing.Union[google.cloud.bigquery.routine.routine.Routine, google.cloud.bigquery.routine.routine.RoutineReference, str], retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> google.cloud.bigquery.routine.routine.Routine: pass)
Could not format the given code:
Cannot parse: 1:117: def get_service_account_email(project: typing.Optional[str] = None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> str: pass)
Could not format the given code:
Cannot parse: 1:211: def get_table(table: typing.Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str], retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> google.cloud.bigquery.table.Table: pass)
Could not format the given code:
Cannot parse: 1:573: def insert_rows_json(table: typing.Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str], json_rows: typing.Sequence[typing.Mapping[str, typing.Any]], row_ids: typing.Optional[typing.Union[typing.Iterable[typing.Optional[str]], google.cloud.bigquery.enums.AutoRowIDs]] = AutoRowIDs.GENERATE_UUID, skip_invalid_rows: typing.Optional[bool] = None, ignore_unknown_values: typing.Optional[bool] = None, template_suffix: typing.Optional[str] = None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> typing.Sequence[dict]: pass)
Can't get argspec for : google.cloud.bigquery.client.Client.job_from_resource. 'NoneType' object is not iterable
Could not format the given code:
Cannot parse: 1:252: def list_datasets(project: typing.Optional[str] = None, include_all: bool = False, filter: typing.Optional[str] = None, max_results: typing.Optional[int] = None, page_token: typing.Optional[str] = None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None, page_size: typing.Optional[int] = None) -> google.api_core.page_iterator.Iterator: pass)
Could not format the given code:
Cannot parse: 1:365: def list_jobs(project: typing.Optional[str] = None, parent_job: typing.Optional[typing.Union[google.cloud.bigquery.job.query.QueryJob, str]] = None, max_results: typing.Optional[int] = None, page_token: typing.Optional[str] = None, all_users: typing.Optional[bool] = None, state_filter: typing.Optional[str] = None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None, min_creation_time: typing.Optional[datetime.datetime] = None, max_creation_time: typing.Optional[datetime.datetime] = None, page_size: typing.Optional[int] = None) -> google.api_core.page_iterator.Iterator: pass)
Could not format the given code:
Cannot parse: 1:310: def list_models(dataset: typing.Union[google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str], max_results: typing.Optional[int] = None, page_token: typing.Optional[str] = None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None, page_size: typing.Optional[int] = None) -> google.api_core.page_iterator.Iterator: pass)
Could not format the given code:
Cannot parse: 1:217: def list_partitions(table: typing.Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str], retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> typing.Sequence[str]: pass)
Could not format the given code:
Cannot parse: 1:150: def list_projects(max_results: typing.Optional[int] = None, page_token: typing.Optional[str] = None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None, page_size: typing.Optional[int] = None) -> google.api_core.page_iterator.Iterator: pass)
Could not format the given code:
Cannot parse: 1:312: def list_routines(dataset: typing.Union[google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str], max_results: typing.Optional[int] = None, page_token: typing.Optional[str] = None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None, page_size: typing.Optional[int] = None) -> google.api_core.page_iterator.Iterator: pass)
Could not format the given code:
Cannot parse: 1:476: def list_rows(table: typing.Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableListItem, google.cloud.bigquery.table.TableReference, str], selected_fields: typing.Optional[typing.Sequence[google.cloud.bigquery.schema.SchemaField]] = None, max_results: typing.Optional[int] = None, page_token: typing.Optional[str] = None, start_index: typing.Optional[int] = None, page_size: typing.Optional[int] = None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> google.cloud.bigquery.table.RowIterator: pass)
Could not format the given code:
Cannot parse: 1:310: def list_tables(dataset: typing.Union[google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str], max_results: typing.Optional[int] = None, page_token: typing.Optional[str] = None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None, page_size: typing.Optional[int] = None) -> google.api_core.page_iterator.Iterator: pass)
Could not parse argument information for dataframe.
Could not format the given code:
Cannot parse: 1:521: def load_table_from_uri(source_uris: typing.Union[str, typing.Sequence[str]], destination: typing.Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str], job_id: typing.Optional[str] = None, job_id_prefix: typing.Optional[str] = None, location: typing.Optional[str] = None, project: typing.Optional[str] = None, job_config: typing.Optional[google.cloud.bigquery.job.load.LoadJobConfig] = None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> google.cloud.bigquery.job.load.LoadJob: pass)
Skip inspecting for property: google.cloud.bigquery.client.Client.location
Could not format the given code:
Cannot parse: 1:313: def query(query: str, job_config: typing.Optional[google.cloud.bigquery.job.query.QueryJobConfig] = None, job_id: typing.Optional[str] = None, job_id_prefix: typing.Optional[str] = None, location: typing.Optional[str] = None, project: typing.Optional[str] = None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None, job_retry: google.api_core.retry.retry_unary.Retry = , api_method: typing.Union[str, google.cloud.bigquery.enums.QueryApiMethod] = QueryApiMethod.INSERT) -> google.cloud.bigquery.job.query.QueryJob: pass)
Could not format the given code:
Cannot parse: 1:284: def query_and_wait(query, *, job_config: typing.Optional[google.cloud.bigquery.job.query.QueryJobConfig] = None, location: typing.Optional[str] = None, project: typing.Optional[str] = None, api_timeout: typing.Optional[float] = None, wait_timeout: typing.Union[float, None, object] = , retry: google.api_core.retry.retry_unary.Retry = , job_retry: google.api_core.retry.retry_unary.Retry = , page_size: typing.Optional[int] = None, max_results: typing.Optional[int] = None) -> google.cloud.bigquery.table.RowIterator: pass)
Can't get argspec for : google.cloud.bigquery.client.Client.query_and_wait. 'NoneType' object is not iterable
Could not parse argument information for file_or_path.
Can't get argspec for : google.cloud.bigquery.client.Client.schema_from_json. 'NoneType' object is not iterable
Could not parse argument information for destination.
Can't get argspec for : google.cloud.bigquery.client.Client.schema_to_json. 'NoneType' object is not iterable
Could not format the given code:
Cannot parse: 1:293: def set_iam_policy(table: typing.Union[google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str], policy: google.api_core.iam.Policy, updateMask: typing.Optional[str] = None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None, *, fields: typing.Sequence[str] = ()) -> google.api_core.iam.Policy: pass)
Could not format the given code:
Cannot parse: 1:146: def update_dataset(dataset: google.cloud.bigquery.dataset.Dataset, fields: typing.Sequence[str], retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> google.cloud.bigquery.dataset.Dataset: pass)
Could not format the given code:
Cannot parse: 1:138: def update_model(model: google.cloud.bigquery.model.Model, fields: typing.Sequence[str], retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> google.cloud.bigquery.model.Model: pass)
Could not format the given code:
Cannot parse: 1:154: def update_routine(routine: google.cloud.bigquery.routine.routine.Routine, fields: typing.Sequence[str], retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> google.cloud.bigquery.routine.routine.Routine: pass)
Could not format the given code:
Cannot parse: 1:138: def update_table(table: google.cloud.bigquery.table.Table, fields: typing.Sequence[str], retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> google.cloud.bigquery.table.Table: pass)
Can't get argspec for : google.cloud.bigquery.client.Project. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.client.Project.from_api_repr. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.job.Compression.DEFLATE
Can't inspect type : google.cloud.bigquery.job.Compression.GZIP
Can't inspect type : google.cloud.bigquery.job.Compression.NONE
Can't inspect type : google.cloud.bigquery.job.Compression.SNAPPY
Can't inspect type : google.cloud.bigquery.job.Compression.ZSTD
Can't get argspec for : google.cloud.bigquery.job.CopyJob.add_done_callback. 'NoneType' object is not iterable
Could not format the given code:
Cannot parse: 1:90: def cancel(client=None, retry: typing.Optional[google.api_core.retry.retry_unary.Retry] = , timeout: typing.Optional[float] = None) -> bool: pass)
Can't get argspec for : google.cloud.bigquery.job.CopyJob.cancelled. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.configuration
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.create_disposition
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.created
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.destination
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.destination_encryption_configuration
Could not format the given code:
Cannot parse: 1:58: def done(retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = 128, reload: bool = True) -> bool: pass)
Could not parse argument information for retry.
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.ended
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.error_result
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.errors
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.etag
Could not format the given code:
Cannot parse: 1:22: def exception(timeout=): pass)
Could not format the given code:
Cannot parse: 1:73: def exists(client=None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> bool: pass)
Could not parse argument information for retry.
Can't get argspec for : google.cloud.bigquery.job.CopyJob.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.job_id
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.job_type
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.labels
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.location
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.num_child_jobs
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.parent_job_id
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.path
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.project
Could not format the given code:
Cannot parse: 1:73: def reload(client=None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = 128): pass)
Could not parse argument information for retry.
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.reservation_usage
Could not format the given code:
Cannot parse: 1:77: def result(retry: typing.Optional[google.api_core.retry.retry_unary.Retry] = , timeout: typing.Optional[float] = None) -> google.cloud.bigquery.job.base._AsyncJob: pass)
Can't get argspec for : google.cloud.bigquery.job.CopyJob.running. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.script_statistics
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.self_link
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.session_info
Can't get argspec for : google.cloud.bigquery.job.CopyJob.set_exception. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.job.CopyJob.set_result. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.sources
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.started
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.state
Can't get argspec for : google.cloud.bigquery.job.CopyJob.to_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.transaction_info
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.user_email
Skip inspecting for property: google.cloud.bigquery.job.CopyJob.write_disposition
Can't get argspec for : google.cloud.bigquery.job.CopyJobConfig. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.job.CopyJobConfig.__setattr__. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.CopyJobConfig.create_disposition
Skip inspecting for property: google.cloud.bigquery.job.CopyJobConfig.destination_encryption_configuration
Skip inspecting for property: google.cloud.bigquery.job.CopyJobConfig.destination_expiration_time
Can't get argspec for : google.cloud.bigquery.job.CopyJobConfig.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.CopyJobConfig.job_timeout_ms
Skip inspecting for property: google.cloud.bigquery.job.CopyJobConfig.labels
Skip inspecting for property: google.cloud.bigquery.job.CopyJobConfig.operation_type
Can't get argspec for : google.cloud.bigquery.job.CopyJobConfig.to_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.CopyJobConfig.write_disposition
Can't get argspec for : google.cloud.bigquery.job.CreateDisposition. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.job.CreateDisposition.CREATE_IF_NEEDED
Can't inspect type : google.cloud.bigquery.job.CreateDisposition.CREATE_NEVER
Can't get argspec for : google.cloud.bigquery.job.DestinationFormat. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.job.DestinationFormat.AVRO
Can't inspect type : google.cloud.bigquery.job.DestinationFormat.CSV
Can't inspect type : google.cloud.bigquery.job.DestinationFormat.NEWLINE_DELIMITED_JSON
Can't inspect type : google.cloud.bigquery.job.DestinationFormat.PARQUET
Can't get argspec for : google.cloud.bigquery.job.DmlStats.count. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.job.DmlStats.count
Can't inspect type : google.cloud.bigquery.job.DmlStats.deleted_row_count
Can't inspect type : google.cloud.bigquery.job.DmlStats.index
Can't inspect type : google.cloud.bigquery.job.DmlStats.inserted_row_count
Can't inspect type : google.cloud.bigquery.job.DmlStats.updated_row_count
Can't get argspec for : google.cloud.bigquery.job.Encoding. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.job.Encoding.ISO_8859_1
Can't inspect type : google.cloud.bigquery.job.Encoding.UTF_8
Can't get argspec for : google.cloud.bigquery.job.ExtractJob.add_done_callback. 'NoneType' object is not iterable
Could not format the given code:
Cannot parse: 1:90: def cancel(client=None, retry: typing.Optional[google.api_core.retry.retry_unary.Retry] = , timeout: typing.Optional[float] = None) -> bool: pass)
Can't get argspec for : google.cloud.bigquery.job.ExtractJob.cancelled. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.compression
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.configuration
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.created
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.destination_format
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.destination_uri_file_counts
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.destination_uris
Could not format the given code:
Cannot parse: 1:58: def done(retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = 128, reload: bool = True) -> bool: pass)
Could not parse argument information for retry.
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.ended
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.error_result
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.errors
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.etag
Could not format the given code:
Cannot parse: 1:22: def exception(timeout=): pass)
Could not format the given code:
Cannot parse: 1:73: def exists(client=None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> bool: pass)
Could not parse argument information for retry.
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.field_delimiter
Can't get argspec for : google.cloud.bigquery.job.ExtractJob.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.job_id
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.job_type
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.labels
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.location
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.num_child_jobs
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.parent_job_id
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.path
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.print_header
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.project
Could not format the given code:
Cannot parse: 1:73: def reload(client=None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = 128): pass)
Could not parse argument information for retry.
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.reservation_usage
Could not format the given code:
Cannot parse: 1:77: def result(retry: typing.Optional[google.api_core.retry.retry_unary.Retry] = , timeout: typing.Optional[float] = None) -> google.cloud.bigquery.job.base._AsyncJob: pass)
Can't get argspec for : google.cloud.bigquery.job.ExtractJob.running. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.script_statistics
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.self_link
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.session_info
Can't get argspec for : google.cloud.bigquery.job.ExtractJob.set_exception. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.job.ExtractJob.set_result. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.source
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.started
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.state
Can't get argspec for : google.cloud.bigquery.job.ExtractJob.to_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.transaction_info
Skip inspecting for property: google.cloud.bigquery.job.ExtractJob.user_email
Can't get argspec for : google.cloud.bigquery.job.ExtractJobConfig. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.job.ExtractJobConfig.__setattr__. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.ExtractJobConfig.compression
Skip inspecting for property: google.cloud.bigquery.job.ExtractJobConfig.destination_format
Skip inspecting for property: google.cloud.bigquery.job.ExtractJobConfig.field_delimiter
Can't get argspec for : google.cloud.bigquery.job.ExtractJobConfig.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.ExtractJobConfig.job_timeout_ms
Skip inspecting for property: google.cloud.bigquery.job.ExtractJobConfig.labels
Skip inspecting for property: google.cloud.bigquery.job.ExtractJobConfig.print_header
Can't get argspec for : google.cloud.bigquery.job.ExtractJobConfig.to_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.ExtractJobConfig.use_avro_logical_types
Can't get argspec for : google.cloud.bigquery.job.LoadJob.add_done_callback. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.allow_jagged_rows
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.allow_quoted_newlines
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.autodetect
Could not format the given code:
Cannot parse: 1:90: def cancel(client=None, retry: typing.Optional[google.api_core.retry.retry_unary.Retry] = , timeout: typing.Optional[float] = None) -> bool: pass)
Can't get argspec for : google.cloud.bigquery.job.LoadJob.cancelled. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.clustering_fields
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.configuration
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.connection_properties
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.create_disposition
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.create_session
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.created
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.destination
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.destination_encryption_configuration
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.destination_table_description
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.destination_table_friendly_name
Could not format the given code:
Cannot parse: 1:58: def done(retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = 128, reload: bool = True) -> bool: pass)
Could not parse argument information for retry.
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.encoding
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.ended
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.error_result
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.errors
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.etag
Could not format the given code:
Cannot parse: 1:22: def exception(timeout=): pass)
Could not format the given code:
Cannot parse: 1:73: def exists(client=None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> bool: pass)
Could not parse argument information for retry.
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.field_delimiter
Can't get argspec for : google.cloud.bigquery.job.LoadJob.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.ignore_unknown_values
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.input_file_bytes
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.input_files
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.job_id
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.job_type
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.labels
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.location
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.max_bad_records
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.null_marker
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.num_child_jobs
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.output_bytes
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.output_rows
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.parent_job_id
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.path
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.project
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.quote_character
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.range_partitioning
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.reference_file_schema_uri
Could not format the given code:
Cannot parse: 1:73: def reload(client=None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = 128): pass)
Could not parse argument information for retry.
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.reservation_usage
Could not format the given code:
Cannot parse: 1:77: def result(retry: typing.Optional[google.api_core.retry.retry_unary.Retry] = , timeout: typing.Optional[float] = None) -> google.cloud.bigquery.job.base._AsyncJob: pass)
Can't get argspec for : google.cloud.bigquery.job.LoadJob.running. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.schema
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.schema_update_options
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.script_statistics
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.self_link
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.session_info
Can't get argspec for : google.cloud.bigquery.job.LoadJob.set_exception. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.job.LoadJob.set_result. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.skip_leading_rows
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.source_format
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.source_uris
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.started
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.state
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.time_partitioning
Can't get argspec for : google.cloud.bigquery.job.LoadJob.to_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.transaction_info
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.use_avro_logical_types
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.user_email
Skip inspecting for property: google.cloud.bigquery.job.LoadJob.write_disposition
Can't get argspec for : google.cloud.bigquery.job.LoadJobConfig. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.job.LoadJobConfig.__setattr__. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.allow_jagged_rows
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.allow_quoted_newlines
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.autodetect
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.clustering_fields
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.column_name_character_map
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.connection_properties
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.create_disposition
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.create_session
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.decimal_target_types
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.destination_encryption_configuration
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.destination_table_description
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.destination_table_friendly_name
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.encoding
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.field_delimiter
Can't get argspec for : google.cloud.bigquery.job.LoadJobConfig.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.hive_partitioning
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.ignore_unknown_values
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.job_timeout_ms
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.json_extension
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.labels
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.max_bad_records
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.null_marker
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.parquet_options
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.preserve_ascii_control_characters
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.projection_fields
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.quote_character
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.range_partitioning
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.reference_file_schema_uri
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.schema
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.schema_update_options
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.skip_leading_rows
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.source_format
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.time_partitioning
Can't get argspec for : google.cloud.bigquery.job.LoadJobConfig.to_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.use_avro_logical_types
Skip inspecting for property: google.cloud.bigquery.job.LoadJobConfig.write_disposition
Can't get argspec for : google.cloud.bigquery.job.OperationType. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.job.OperationType.CLONE
Can't inspect type : google.cloud.bigquery.job.OperationType.COPY
Can't inspect type : google.cloud.bigquery.job.OperationType.OPERATION_TYPE_UNSPECIFIED
Can't inspect type : google.cloud.bigquery.job.OperationType.RESTORE
Can't inspect type : google.cloud.bigquery.job.OperationType.SNAPSHOT
Can't get argspec for : google.cloud.bigquery.job.QueryJob.add_done_callback. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.allow_large_results
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.billing_tier
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.cache_hit
Could not format the given code:
Cannot parse: 1:90: def cancel(client=None, retry: typing.Optional[google.api_core.retry.retry_unary.Retry] = , timeout: typing.Optional[float] = None) -> bool: pass)
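
This "Cannot parse" failure, and the matching ones below for done, exception, exists, reload, and result, share one symptom: a default value rendered as nothing at all, leaving "= ," (or "= )"), which is not valid Python, so the code formatter rejects the stub. A sketch of a parseable version of this particular stub; DEFAULT_RETRY is an assumption standing in for whatever sentinel the tool failed to render:

    import typing
    import google.api_core.retry

    # Assumption: the elided default is a retry sentinel object whose
    # repr() is not valid source text, which would explain the empty "= ,".
    DEFAULT_RETRY = google.api_core.retry.Retry()

    def cancel(
        client=None,
        retry: typing.Optional[google.api_core.retry.Retry] = DEFAULT_RETRY,
        timeout: typing.Optional[float] = None,
    ) -> bool:
        pass
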
Can't get argspec for : google.cloud.bigquery.job.QueryJob.cancelled. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.clustering_fields
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.configuration
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.connection_properties
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.create_disposition
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.create_session
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.created
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.ddl_operation_performed
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.ddl_target_routine
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.ddl_target_table
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.default_dataset
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.destination
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.destination_encryption_configuration
Could not format the given code:
Cannot parse: 1:58: def done(retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = 128, reload: bool = True) -> bool: pass)
Could not parse argument information for retry.
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.dry_run
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.ended
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.error_result
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.errors
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.estimated_bytes_processed
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.etag
Could not format the given code:
Cannot parse: 1:22: def exception(timeout=): pass)
Could not format the given code:
Cannot parse: 1:73: def exists(client=None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> bool: pass)
Could not parse argument information for retry.
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.flatten_results
Could not parse argument information for client.
Can't get argspec for : google.cloud.bigquery.job.QueryJob.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.job_id
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.job_type
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.labels
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.location
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.maximum_billing_tier
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.maximum_bytes_billed
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.num_child_jobs
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.num_dml_affected_rows
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.parent_job_id
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.path
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.priority
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.project
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.query
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.query_id
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.query_parameters
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.query_plan
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.range_partitioning
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.referenced_tables
Could not format the given code:
Cannot parse: 1:73: def reload(client=None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = 128): pass)
Could not parse argument information for retry.
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.reservation_usage
Could not format the given code:
Cannot parse: 1:159: def result(page_size: typing.Optional[int] = None, max_results: typing.Optional[int] = None, retry: typing.Optional[google.api_core.retry.retry_unary.Retry] = , timeout: typing.Optional[typing.Union[float, object]] = , start_index: typing.Optional[int] = None, job_retry: typing.Optional[google.api_core.retry.retry_unary.Retry] = ) -> typing.Union[RowIterator, google.cloud.bigquery.table._EmptyRowIterator]: pass)
Can't get argspec for : google.cloud.bigquery.job.QueryJob.running. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.schema
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.schema_update_options
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.script_statistics
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.search_stats
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.self_link
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.session_info
Can't get argspec for : google.cloud.bigquery.job.QueryJob.set_exception. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.job.QueryJob.set_result. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.slot_millis
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.started
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.state
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.statement_type
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.table_definitions
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.time_partitioning
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.timeline
Can't get argspec for : google.cloud.bigquery.job.QueryJob.to_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.total_bytes_billed
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.total_bytes_processed
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.transaction_info
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.udf_resources
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.undeclared_query_parameters
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.use_legacy_sql
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.use_query_cache
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.user_email
Skip inspecting for property: google.cloud.bigquery.job.QueryJob.write_disposition
Can't get argspec for : google.cloud.bigquery.job.QueryJobConfig. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.job.QueryJobConfig.__setattr__. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.QueryJobConfig.allow_large_results
Skip inspecting for property: google.cloud.bigquery.job.QueryJobConfig.clustering_fields
Skip inspecting for property: google.cloud.bigquery.job.QueryJobConfig.connection_properties
Skip inspecting for property: google.cloud.bigquery.job.QueryJobConfig.create_disposition
Skip inspecting for property: google.cloud.bigquery.job.QueryJobConfig.create_session
Skip inspecting for property: google.cloud.bigquery.job.QueryJobConfig.default_dataset
Skip inspecting for property: google.cloud.bigquery.job.QueryJobConfig.destination
Skip inspecting for property: google.cloud.bigquery.job.QueryJobConfig.destination_encryption_configuration
Skip inspecting for property: google.cloud.bigquery.job.QueryJobConfig.dry_run
Skip inspecting for property: google.cloud.bigquery.job.QueryJobConfig.flatten_results
Can't get argspec for : google.cloud.bigquery.job.QueryJobConfig.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.QueryJobConfig.job_timeout_ms
Skip inspecting for property: google.cloud.bigquery.job.QueryJobConfig.labels
Skip inspecting for property: google.cloud.bigquery.job.QueryJobConfig.maximum_billing_tier
Skip inspecting for property: google.cloud.bigquery.job.QueryJobConfig.maximum_bytes_billed
Skip inspecting for property: google.cloud.bigquery.job.QueryJobConfig.priority
Skip inspecting for property: google.cloud.bigquery.job.QueryJobConfig.query_parameters
Skip inspecting for property: google.cloud.bigquery.job.QueryJobConfig.range_partitioning
Skip inspecting for property: google.cloud.bigquery.job.QueryJobConfig.schema_update_options
Skip inspecting for property: google.cloud.bigquery.job.QueryJobConfig.script_options
Skip inspecting for property: google.cloud.bigquery.job.QueryJobConfig.table_definitions
Skip inspecting for property: google.cloud.bigquery.job.QueryJobConfig.time_partitioning
Can't get argspec for : google.cloud.bigquery.job.QueryJobConfig.to_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.QueryJobConfig.udf_resources
Skip inspecting for property: google.cloud.bigquery.job.QueryJobConfig.use_legacy_sql
Skip inspecting for property: google.cloud.bigquery.job.QueryJobConfig.use_query_cache
Skip inspecting for property: google.cloud.bigquery.job.QueryJobConfig.write_disposition
Can't get argspec for : google.cloud.bigquery.job.QueryPlanEntry. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.completed_parallel_inputs
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.compute_ms_avg
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.compute_ms_max
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.compute_ratio_avg
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.compute_ratio_max
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.end
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.entry_id
Can't get argspec for : google.cloud.bigquery.job.QueryPlanEntry.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.input_stages
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.name
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.parallel_inputs
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.read_ms_avg
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.read_ms_max
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.read_ratio_avg
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.read_ratio_max
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.records_read
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.records_written
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.shuffle_output_bytes
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.shuffle_output_bytes_spilled
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.slot_ms
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.start
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.status
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.steps
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.wait_ms_avg
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.wait_ms_max
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.wait_ratio_avg
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.wait_ratio_max
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.write_ms_avg
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.write_ms_max
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.write_ratio_avg
Skip inspecting for property: google.cloud.bigquery.job.QueryPlanEntry.write_ratio_max
Can't get argspec for : google.cloud.bigquery.job.QueryPlanEntryStep. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.job.QueryPlanEntryStep.from_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.job.QueryPriority. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.job.QueryPriority.BATCH
Can't inspect type : google.cloud.bigquery.job.QueryPriority.INTERACTIVE
Can't get argspec for : google.cloud.bigquery.job.ReservationUsage. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.job.ReservationUsage
Can't get argspec for : google.cloud.bigquery.job.ReservationUsage.count. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.job.ReservationUsage.count
Can't inspect type : google.cloud.bigquery.job.ReservationUsage.index
Can't inspect type : google.cloud.bigquery.job.ReservationUsage.name
Can't inspect type : google.cloud.bigquery.job.ReservationUsage.slot_ms
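
The ReservationUsage cluster above has the shape of a NamedTuple: name and slot_ms read as fields, while count and index are the methods every tuple inherits, and none of them expose an argspec the inspector can use. A minimal sketch matching the attribute names in the log (TransactionInfo below, with its count/index/transaction_id lines, fits the same pattern):

    import typing

    class ReservationUsage(typing.NamedTuple):
        """Sketch matching the attribute names reported by the inspector."""
        name: str      # reservation name
        slot_ms: int   # slot-milliseconds consumed

    usage = ReservationUsage(name="prod-reservation", slot_ms=1250)
    usage.count(1250)                # inherited tuple methods the
    usage.index("prod-reservation")  # inspector also tried to describe
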
Can't get argspec for : google.cloud.bigquery.job.SchemaUpdateOption. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.job.SchemaUpdateOption.ALLOW_FIELD_ADDITION
Can't inspect type : google.cloud.bigquery.job.SchemaUpdateOption.ALLOW_FIELD_RELAXATION
Can't get argspec for : google.cloud.bigquery.job.ScriptOptions.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.ScriptOptions.key_result_statement
Skip inspecting for property: google.cloud.bigquery.job.ScriptOptions.statement_byte_budget
Skip inspecting for property: google.cloud.bigquery.job.ScriptOptions.statement_timeout_ms
Can't get argspec for : google.cloud.bigquery.job.ScriptOptions.to_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.job.ScriptStackFrame. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.ScriptStackFrame.end_column
Skip inspecting for property: google.cloud.bigquery.job.ScriptStackFrame.end_line
Skip inspecting for property: google.cloud.bigquery.job.ScriptStackFrame.procedure_id
Skip inspecting for property: google.cloud.bigquery.job.ScriptStackFrame.start_column
Skip inspecting for property: google.cloud.bigquery.job.ScriptStackFrame.start_line
Skip inspecting for property: google.cloud.bigquery.job.ScriptStackFrame.text
Can't get argspec for : google.cloud.bigquery.job.ScriptStatistics. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.ScriptStatistics.evaluation_kind
Skip inspecting for property: google.cloud.bigquery.job.ScriptStatistics.stack_frames
Can't get argspec for : google.cloud.bigquery.job.SourceFormat. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.job.SourceFormat.AVRO
Can't inspect type : google.cloud.bigquery.job.SourceFormat.CSV
Can't inspect type : google.cloud.bigquery.job.SourceFormat.DATASTORE_BACKUP
Can't inspect type : google.cloud.bigquery.job.SourceFormat.NEWLINE_DELIMITED_JSON
Can't inspect type : google.cloud.bigquery.job.SourceFormat.ORC
Can't inspect type : google.cloud.bigquery.job.SourceFormat.PARQUET
Can't get argspec for : google.cloud.bigquery.job.TimelineEntry. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.TimelineEntry.active_units
Skip inspecting for property: google.cloud.bigquery.job.TimelineEntry.completed_units
Skip inspecting for property: google.cloud.bigquery.job.TimelineEntry.elapsed_ms
Can't get argspec for : google.cloud.bigquery.job.TimelineEntry.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.TimelineEntry.pending_units
Skip inspecting for property: google.cloud.bigquery.job.TimelineEntry.slot_millis
Can't get argspec for : google.cloud.bigquery.job.TransactionInfo. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.job.TransactionInfo.count. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.job.TransactionInfo.count
Can't inspect type : google.cloud.bigquery.job.TransactionInfo.index
Can't inspect type : google.cloud.bigquery.job.TransactionInfo.transaction_id
Can't get argspec for : google.cloud.bigquery.job.UnknownJob. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.job.UnknownJob.add_done_callback. 'NoneType' object is not iterable
Could not format the given code:
Cannot parse: 1:90: def cancel(client=None, retry: typing.Optional[google.api_core.retry.retry_unary.Retry] = , timeout: typing.Optional[float] = None) -> bool: pass)
Can't get argspec for : google.cloud.bigquery.job.UnknownJob.cancelled. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.UnknownJob.configuration
Skip inspecting for property: google.cloud.bigquery.job.UnknownJob.created
Could not format the given code:
Cannot parse: 1:58: def done(retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = 128, reload: bool = True) -> bool: pass)
Could not parse argument information for retry.
Skip inspecting for property: google.cloud.bigquery.job.UnknownJob.ended
Skip inspecting for property: google.cloud.bigquery.job.UnknownJob.error_result
Skip inspecting for property: google.cloud.bigquery.job.UnknownJob.errors
Skip inspecting for property: google.cloud.bigquery.job.UnknownJob.etag
Could not format the given code:
Cannot parse: 1:22: def exception(timeout=): pass)
Could not format the given code:
Cannot parse: 1:73: def exists(client=None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = None) -> bool: pass)
Could not parse argument information for retry.
Can't get argspec for : google.cloud.bigquery.job.UnknownJob.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.UnknownJob.job_id
Skip inspecting for property: google.cloud.bigquery.job.UnknownJob.job_type
Skip inspecting for property: google.cloud.bigquery.job.UnknownJob.labels
Skip inspecting for property: google.cloud.bigquery.job.UnknownJob.location
Skip inspecting for property: google.cloud.bigquery.job.UnknownJob.num_child_jobs
Skip inspecting for property: google.cloud.bigquery.job.UnknownJob.parent_job_id
Skip inspecting for property: google.cloud.bigquery.job.UnknownJob.path
Skip inspecting for property: google.cloud.bigquery.job.UnknownJob.project
Could not format the given code:
Cannot parse: 1:73: def reload(client=None, retry: google.api_core.retry.retry_unary.Retry = , timeout: typing.Optional[float] = 128): pass)
Could not parse argument information for retry.
Skip inspecting for property: google.cloud.bigquery.job.UnknownJob.reservation_usage
Could not format the given code:
Cannot parse: 1:77: def result(retry: typing.Optional[google.api_core.retry.retry_unary.Retry] = , timeout: typing.Optional[float] = None) -> google.cloud.bigquery.job.base._AsyncJob: pass)
Can't get argspec for : google.cloud.bigquery.job.UnknownJob.running. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.UnknownJob.script_statistics
Skip inspecting for property: google.cloud.bigquery.job.UnknownJob.self_link
Skip inspecting for property: google.cloud.bigquery.job.UnknownJob.session_info
Can't get argspec for : google.cloud.bigquery.job.UnknownJob.set_exception. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.job.UnknownJob.set_result. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.UnknownJob.started
Skip inspecting for property: google.cloud.bigquery.job.UnknownJob.state
Can't get argspec for : google.cloud.bigquery.job.UnknownJob.to_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.job.UnknownJob.transaction_info
Skip inspecting for property: google.cloud.bigquery.job.UnknownJob.user_email
Can't get argspec for : google.cloud.bigquery.job.WriteDisposition. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.job.WriteDisposition.WRITE_APPEND
Can't inspect type : google.cloud.bigquery.job.WriteDisposition.WRITE_EMPTY
Can't inspect type : google.cloud.bigquery.job.WriteDisposition.WRITE_TRUNCATE
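
The "Can't inspect type" lines for WriteDisposition, like those above for OperationType, QueryPriority, SchemaUpdateOption, and SourceFormat, are what a plain constants holder produces: each attribute is a bare string, so there is no richer type for the inspector to report. A sketch of that pattern using the attribute names from the log:

    class WriteDisposition:
        """String constants rather than an enum.Enum; each attribute is
        just a str, which is why type inspection comes back empty."""
        WRITE_APPEND = "WRITE_APPEND"
        WRITE_TRUNCATE = "WRITE_TRUNCATE"
        WRITE_EMPTY = "WRITE_EMPTY"
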
Skip inspecting for property: google.cloud.bigquery.dataset.AccessEntry.dataset
Skip inspecting for property: google.cloud.bigquery.dataset.AccessEntry.dataset_target_types
Skip inspecting for property: google.cloud.bigquery.dataset.AccessEntry.domain
Skip inspecting for property: google.cloud.bigquery.dataset.AccessEntry.entity_id
Skip inspecting for property: google.cloud.bigquery.dataset.AccessEntry.entity_type
Can't get argspec for : google.cloud.bigquery.dataset.AccessEntry.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.dataset.AccessEntry.group_by_email
Skip inspecting for property: google.cloud.bigquery.dataset.AccessEntry.role
Skip inspecting for property: google.cloud.bigquery.dataset.AccessEntry.routine
Skip inspecting for property: google.cloud.bigquery.dataset.AccessEntry.special_group
Can't get argspec for : google.cloud.bigquery.dataset.AccessEntry.to_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.dataset.AccessEntry.user_by_email
Skip inspecting for property: google.cloud.bigquery.dataset.AccessEntry.view
Can't get argspec for : google.cloud.bigquery.dataset.Dataset. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.dataset.Dataset.access_entries
Skip inspecting for property: google.cloud.bigquery.dataset.Dataset.created
Skip inspecting for property: google.cloud.bigquery.dataset.Dataset.dataset_id
Skip inspecting for property: google.cloud.bigquery.dataset.Dataset.default_encryption_configuration
Skip inspecting for property: google.cloud.bigquery.dataset.Dataset.default_partition_expiration_ms
Skip inspecting for property: google.cloud.bigquery.dataset.Dataset.default_rounding_mode
Skip inspecting for property: google.cloud.bigquery.dataset.Dataset.default_table_expiration_ms
Skip inspecting for property: google.cloud.bigquery.dataset.Dataset.description
Skip inspecting for property: google.cloud.bigquery.dataset.Dataset.etag
Skip inspecting for property: google.cloud.bigquery.dataset.Dataset.friendly_name
Can't get argspec for : google.cloud.bigquery.dataset.Dataset.from_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.dataset.Dataset.from_string. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.dataset.Dataset.full_dataset_id
Skip inspecting for property: google.cloud.bigquery.dataset.Dataset.is_case_insensitive
Skip inspecting for property: google.cloud.bigquery.dataset.Dataset.labels
Skip inspecting for property: google.cloud.bigquery.dataset.Dataset.location
Skip inspecting for property: google.cloud.bigquery.dataset.Dataset.max_time_travel_hours
Can't get argspec for : google.cloud.bigquery.dataset.Dataset.model. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.dataset.Dataset.modified
Skip inspecting for property: google.cloud.bigquery.dataset.Dataset.path
Skip inspecting for property: google.cloud.bigquery.dataset.Dataset.project
Skip inspecting for property: google.cloud.bigquery.dataset.Dataset.reference
Can't get argspec for : google.cloud.bigquery.dataset.Dataset.routine. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.dataset.Dataset.self_link
Skip inspecting for property: google.cloud.bigquery.dataset.Dataset.storage_billing_model
Can't get argspec for : google.cloud.bigquery.dataset.Dataset.table. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.dataset.Dataset.to_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.dataset.DatasetListItem. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.dataset.DatasetListItem.dataset_id
Skip inspecting for property: google.cloud.bigquery.dataset.DatasetListItem.friendly_name
Skip inspecting for property: google.cloud.bigquery.dataset.DatasetListItem.full_dataset_id
Skip inspecting for property: google.cloud.bigquery.dataset.DatasetListItem.labels
Can't get argspec for : google.cloud.bigquery.dataset.DatasetListItem.model. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.dataset.DatasetListItem.project
Skip inspecting for property: google.cloud.bigquery.dataset.DatasetListItem.reference
Can't get argspec for : google.cloud.bigquery.dataset.DatasetListItem.routine. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.dataset.DatasetListItem.table. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.dataset.DatasetReference. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.dataset.DatasetReference.dataset_id
Can't get argspec for : google.cloud.bigquery.dataset.DatasetReference.from_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.dataset.DatasetReference.model. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.dataset.DatasetReference.path
Skip inspecting for property: google.cloud.bigquery.dataset.DatasetReference.project
Can't get argspec for : google.cloud.bigquery.dataset.DatasetReference.routine. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.dataset.DatasetReference.table. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.dataset.DatasetReference.to_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.table.CloneDefinition. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.table.ColumnReference. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.table.ForeignKey. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.table.ForeignKey.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.table.PartitionRange.end
Skip inspecting for property: google.cloud.bigquery.table.PartitionRange.interval
Skip inspecting for property: google.cloud.bigquery.table.PartitionRange.start
Can't get argspec for : google.cloud.bigquery.table.PrimaryKey. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.table.RangePartitioning.field
Skip inspecting for property: google.cloud.bigquery.table.RangePartitioning.range_
Can't get argspec for : google.cloud.bigquery.table.Row. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.table.Row.items. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.table.Row.keys. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.table.Row.values. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.table.RowIterator.__iter__. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.table.RowIterator.client
Can't inspect type : google.cloud.bigquery.table.RowIterator.item_to_value
Skip inspecting for property: google.cloud.bigquery.table.RowIterator.job_id
Skip inspecting for property: google.cloud.bigquery.table.RowIterator.location
Can't inspect type : google.cloud.bigquery.table.RowIterator.max_results
Can't inspect type : google.cloud.bigquery.table.RowIterator.next_page_token
Skip inspecting for property: google.cloud.bigquery.table.RowIterator.num_dml_affected_rows
Can't inspect type : google.cloud.bigquery.table.RowIterator.num_results
Can't inspect type : google.cloud.bigquery.table.RowIterator.page_number
Skip inspecting for property: google.cloud.bigquery.table.RowIterator.pages
Skip inspecting for property: google.cloud.bigquery.table.RowIterator.project
Skip inspecting for property: google.cloud.bigquery.table.RowIterator.query_id
Skip inspecting for property: google.cloud.bigquery.table.RowIterator.schema
Could not format the given code:
Cannot parse: 1:123: def to_arrow_iterable(bqstorage_client: typing.Optional[bigquery_storage.BigQueryReadClient] = None, max_queue_size: int = ) -> typing.Iterator[pyarrow.RecordBatch]: pass)
Could not format the given code:
Cannot parse: 1:189: def to_dataframe_iterable(bqstorage_client: typing.Optional[bigquery_storage.BigQueryReadClient] = None, dtypes: typing.Optional[typing.Dict[str, typing.Any]] = None, max_queue_size: int = ) -> pandas.DataFrame: pass)
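
These two stubs fail for the same empty-default reason, but "max_queue_size: int = " with nothing after it points at a module-level sentinel whose repr cannot round-trip into source text. A hedged sketch of that sentinel-default pattern; the name _DEFAULT_QUEUE_SIZE is illustrative, not taken from the log:

    # Illustrative sentinel default; repr(_DEFAULT_QUEUE_SIZE) is something
    # like <object object at 0x...>, which no formatter can re-parse.
    _DEFAULT_QUEUE_SIZE = object()

    def to_dataframe_iterable(bqstorage_client=None, dtypes=None,
                              max_queue_size=_DEFAULT_QUEUE_SIZE):
        if max_queue_size is _DEFAULT_QUEUE_SIZE:
            max_queue_size = 1  # fall back to some concrete bound
        ...
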
Skip inspecting for property: google.cloud.bigquery.table.RowIterator.total_rows
Can't get argspec for : google.cloud.bigquery.table.SnapshotDefinition. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.table.StreamingBuffer. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.table.Table.clone_definition
Skip inspecting for property: google.cloud.bigquery.table.Table.clustering_fields
Skip inspecting for property: google.cloud.bigquery.table.Table.created
Skip inspecting for property: google.cloud.bigquery.table.Table.dataset_id
Skip inspecting for property: google.cloud.bigquery.table.Table.description
Skip inspecting for property: google.cloud.bigquery.table.Table.encryption_configuration
Skip inspecting for property: google.cloud.bigquery.table.Table.etag
Skip inspecting for property: google.cloud.bigquery.table.Table.expires
Skip inspecting for property: google.cloud.bigquery.table.Table.external_data_configuration
Skip inspecting for property: google.cloud.bigquery.table.Table.friendly_name
Can't get argspec for : google.cloud.bigquery.table.Table.from_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.table.Table.from_string. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.table.Table.full_table_id
Skip inspecting for property: google.cloud.bigquery.table.Table.labels
Skip inspecting for property: google.cloud.bigquery.table.Table.location
Skip inspecting for property: google.cloud.bigquery.table.Table.modified
Skip inspecting for property: google.cloud.bigquery.table.Table.mview_enable_refresh
Skip inspecting for property: google.cloud.bigquery.table.Table.mview_last_refresh_time
Skip inspecting for property: google.cloud.bigquery.table.Table.mview_query
Skip inspecting for property: google.cloud.bigquery.table.Table.mview_refresh_interval
Skip inspecting for property: google.cloud.bigquery.table.Table.num_bytes
Skip inspecting for property: google.cloud.bigquery.table.Table.num_rows
Skip inspecting for property: google.cloud.bigquery.table.Table.partition_expiration
Skip inspecting for property: google.cloud.bigquery.table.Table.partitioning_type
Skip inspecting for property: google.cloud.bigquery.table.Table.path
Skip inspecting for property: google.cloud.bigquery.table.Table.project
Skip inspecting for property: google.cloud.bigquery.table.Table.range_partitioning
Skip inspecting for property: google.cloud.bigquery.table.Table.reference
Skip inspecting for property: google.cloud.bigquery.table.Table.require_partition_filter
Skip inspecting for property: google.cloud.bigquery.table.Table.schema
Skip inspecting for property: google.cloud.bigquery.table.Table.self_link
Skip inspecting for property: google.cloud.bigquery.table.Table.snapshot_definition
Skip inspecting for property: google.cloud.bigquery.table.Table.streaming_buffer
Skip inspecting for property: google.cloud.bigquery.table.Table.table_constraints
Skip inspecting for property: google.cloud.bigquery.table.Table.table_id
Skip inspecting for property: google.cloud.bigquery.table.Table.table_type
Skip inspecting for property: google.cloud.bigquery.table.Table.time_partitioning
Can't get argspec for : google.cloud.bigquery.table.Table.to_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.table.Table.to_bqstorage. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.table.Table.view_query
Skip inspecting for property: google.cloud.bigquery.table.Table.view_use_legacy_sql
Can't get argspec for : google.cloud.bigquery.table.TableConstraints. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.table.TableConstraints.from_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.table.TableListItem. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.table.TableListItem.clustering_fields
Skip inspecting for property: google.cloud.bigquery.table.TableListItem.created
Skip inspecting for property: google.cloud.bigquery.table.TableListItem.dataset_id
Skip inspecting for property: google.cloud.bigquery.table.TableListItem.expires
Skip inspecting for property: google.cloud.bigquery.table.TableListItem.friendly_name
Can't get argspec for : google.cloud.bigquery.table.TableListItem.from_string. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.table.TableListItem.full_table_id
Skip inspecting for property: google.cloud.bigquery.table.TableListItem.labels
Skip inspecting for property: google.cloud.bigquery.table.TableListItem.partition_expiration
Skip inspecting for property: google.cloud.bigquery.table.TableListItem.partitioning_type
Skip inspecting for property: google.cloud.bigquery.table.TableListItem.path
Skip inspecting for property: google.cloud.bigquery.table.TableListItem.project
Skip inspecting for property: google.cloud.bigquery.table.TableListItem.reference
Skip inspecting for property: google.cloud.bigquery.table.TableListItem.table_id
Skip inspecting for property: google.cloud.bigquery.table.TableListItem.table_type
Skip inspecting for property: google.cloud.bigquery.table.TableListItem.time_partitioning
Can't get argspec for : google.cloud.bigquery.table.TableListItem.to_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.table.TableListItem.to_bqstorage. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.table.TableListItem.view_use_legacy_sql
Could not parse argument information for dataset_ref.
Can't get argspec for : google.cloud.bigquery.table.TableReference. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.table.TableReference.dataset_id
Can't get argspec for : google.cloud.bigquery.table.TableReference.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.table.TableReference.path
Skip inspecting for property: google.cloud.bigquery.table.TableReference.project
Skip inspecting for property: google.cloud.bigquery.table.TableReference.table_id
Can't get argspec for : google.cloud.bigquery.table.TableReference.to_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.table.TableReference.to_bqstorage. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.table.TimePartitioning.expiration_ms
Skip inspecting for property: google.cloud.bigquery.table.TimePartitioning.field
Can't get argspec for : google.cloud.bigquery.table.TimePartitioning.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.table.TimePartitioning.require_partition_filter
Can't get argspec for : google.cloud.bigquery.table.TimePartitioning.to_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.table.TimePartitioning.type_
Can't get argspec for : google.cloud.bigquery.table.TimePartitioningType. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.table.TimePartitioningType.DAY
Can't inspect type : google.cloud.bigquery.table.TimePartitioningType.HOUR
Can't inspect type : google.cloud.bigquery.table.TimePartitioningType.MONTH
Can't inspect type : google.cloud.bigquery.table.TimePartitioningType.YEAR
Could not parse argument information for model_ref.
Can't get argspec for : google.cloud.bigquery.model.Model. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.model.Model.best_trial_id
Skip inspecting for property: google.cloud.bigquery.model.Model.created
Skip inspecting for property: google.cloud.bigquery.model.Model.dataset_id
Skip inspecting for property: google.cloud.bigquery.model.Model.description
Skip inspecting for property: google.cloud.bigquery.model.Model.encryption_configuration
Skip inspecting for property: google.cloud.bigquery.model.Model.etag
Skip inspecting for property: google.cloud.bigquery.model.Model.expires
Skip inspecting for property: google.cloud.bigquery.model.Model.feature_columns
Skip inspecting for property: google.cloud.bigquery.model.Model.friendly_name
Could not parse argument information for resource.
Can't get argspec for : google.cloud.bigquery.model.Model.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.model.Model.label_columns
Skip inspecting for property: google.cloud.bigquery.model.Model.labels
Skip inspecting for property: google.cloud.bigquery.model.Model.location
Skip inspecting for property: google.cloud.bigquery.model.Model.model_id
Skip inspecting for property: google.cloud.bigquery.model.Model.model_type
Skip inspecting for property: google.cloud.bigquery.model.Model.modified
Skip inspecting for property: google.cloud.bigquery.model.Model.path
Skip inspecting for property: google.cloud.bigquery.model.Model.project
Skip inspecting for property: google.cloud.bigquery.model.Model.reference
Can't get argspec for : google.cloud.bigquery.model.Model.to_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.model.Model.training_runs
Skip inspecting for property: google.cloud.bigquery.model.Model.transform_columns
Can't get argspec for : google.cloud.bigquery.model.ModelReference. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.model.ModelReference.dataset_id
Could not parse argument information for resource.
Can't get argspec for : google.cloud.bigquery.model.ModelReference.from_api_repr. 'NoneType' object is not iterable
Could not parse argument information for model_id.
Could not parse argument information for default_project.
Skip inspecting for property: google.cloud.bigquery.model.ModelReference.model_id
Skip inspecting for property: google.cloud.bigquery.model.ModelReference.path
Skip inspecting for property: google.cloud.bigquery.model.ModelReference.project
Can't get argspec for : google.cloud.bigquery.model.ModelReference.to_api_repr. 'NoneType' object is not iterable
Could not parse argument information for resource.
Can't get argspec for : google.cloud.bigquery.model.TransformColumn. 'NoneType' object is not iterable
Could not parse argument information for resource.
Can't get argspec for : google.cloud.bigquery.model.TransformColumn.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.model.TransformColumn.name
Skip inspecting for property: google.cloud.bigquery.model.TransformColumn.transform_sql
Skip inspecting for property: google.cloud.bigquery.model.TransformColumn.type_
Can't get argspec for : google.cloud.bigquery.routine.DeterminismLevel. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.routine.DeterminismLevel.DETERMINISM_LEVEL_UNSPECIFIED
Can't inspect type : google.cloud.bigquery.routine.DeterminismLevel.DETERMINISTIC
Can't inspect type : google.cloud.bigquery.routine.DeterminismLevel.NOT_DETERMINISTIC
Skip inspecting for property: google.cloud.bigquery.routine.RemoteFunctionOptions.connection
Skip inspecting for property: google.cloud.bigquery.routine.RemoteFunctionOptions.endpoint
Can't get argspec for : google.cloud.bigquery.routine.RemoteFunctionOptions.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.routine.RemoteFunctionOptions.max_batching_rows
Can't get argspec for : google.cloud.bigquery.routine.RemoteFunctionOptions.to_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.routine.RemoteFunctionOptions.user_defined_context
Can't get argspec for : google.cloud.bigquery.routine.Routine. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.routine.Routine.arguments
Skip inspecting for property: google.cloud.bigquery.routine.Routine.body
Skip inspecting for property: google.cloud.bigquery.routine.Routine.created
Skip inspecting for property: google.cloud.bigquery.routine.Routine.data_governance_type
Skip inspecting for property: google.cloud.bigquery.routine.Routine.dataset_id
Skip inspecting for property: google.cloud.bigquery.routine.Routine.description
Skip inspecting for property: google.cloud.bigquery.routine.Routine.determinism_level
Skip inspecting for property: google.cloud.bigquery.routine.Routine.etag
Can't get argspec for : google.cloud.bigquery.routine.Routine.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.routine.Routine.imported_libraries
Skip inspecting for property: google.cloud.bigquery.routine.Routine.language
Skip inspecting for property: google.cloud.bigquery.routine.Routine.modified
Skip inspecting for property: google.cloud.bigquery.routine.Routine.path
Skip inspecting for property: google.cloud.bigquery.routine.Routine.project
Skip inspecting for property: google.cloud.bigquery.routine.Routine.reference
Skip inspecting for property: google.cloud.bigquery.routine.Routine.remote_function_options
Skip inspecting for property: google.cloud.bigquery.routine.Routine.return_table_type
Skip inspecting for property: google.cloud.bigquery.routine.Routine.return_type
Skip inspecting for property: google.cloud.bigquery.routine.Routine.routine_id
Can't get argspec for : google.cloud.bigquery.routine.Routine.to_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.routine.Routine.type_
Can't get argspec for : google.cloud.bigquery.routine.RoutineArgument. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.routine.RoutineArgument.data_type
Can't get argspec for : google.cloud.bigquery.routine.RoutineArgument.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.routine.RoutineArgument.kind
Skip inspecting for property: google.cloud.bigquery.routine.RoutineArgument.mode
Skip inspecting for property: google.cloud.bigquery.routine.RoutineArgument.name
Can't get argspec for : google.cloud.bigquery.routine.RoutineArgument.to_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.routine.RoutineReference. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.routine.RoutineReference.__eq__. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.routine.RoutineReference.__str__. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.routine.RoutineReference.dataset_id
Can't get argspec for : google.cloud.bigquery.routine.RoutineReference.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.routine.RoutineReference.path
Skip inspecting for property: google.cloud.bigquery.routine.RoutineReference.project
Skip inspecting for property: google.cloud.bigquery.routine.RoutineReference.routine_id
Can't get argspec for : google.cloud.bigquery.routine.RoutineReference.to_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.routine.RoutineType. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.schema.FieldElementType. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.schema.FieldElementType.from_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.schema.FieldElementType.to_api_repr. 'NoneType' object is not iterable
Unknown Type: data
Can't get argspec for : google.cloud.bigquery.schema.PolicyTagList.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.schema.PolicyTagList.names
Can't get argspec for : google.cloud.bigquery.schema.PolicyTagList.to_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.schema.SchemaField.default_value_expression
Skip inspecting for property: google.cloud.bigquery.schema.SchemaField.description
Skip inspecting for property: google.cloud.bigquery.schema.SchemaField.field_type
Skip inspecting for property: google.cloud.bigquery.schema.SchemaField.fields
Can't get argspec for : google.cloud.bigquery.schema.SchemaField.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.schema.SchemaField.is_nullable
Skip inspecting for property: google.cloud.bigquery.schema.SchemaField.max_length
Skip inspecting for property: google.cloud.bigquery.schema.SchemaField.mode
Skip inspecting for property: google.cloud.bigquery.schema.SchemaField.name
Skip inspecting for property: google.cloud.bigquery.schema.SchemaField.policy_tags
Skip inspecting for property: google.cloud.bigquery.schema.SchemaField.precision
Skip inspecting for property: google.cloud.bigquery.schema.SchemaField.range_element_type
Skip inspecting for property: google.cloud.bigquery.schema.SchemaField.scale
Can't get argspec for : google.cloud.bigquery.schema.SchemaField.to_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.schema.SchemaField.to_standard_sql. 'NoneType' object is not iterable
Unknown Type: data
Unknown Type: data
Unknown Type: data
Unknown Type: data
Unknown Type: data
Can't get argspec for : google.cloud.bigquery.external_config.BigtableColumn. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.external_config.BigtableColumn.encoding
Skip inspecting for property: google.cloud.bigquery.external_config.BigtableColumn.field_name
Can't get argspec for : google.cloud.bigquery.external_config.BigtableColumn.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.external_config.BigtableColumn.only_read_latest
Skip inspecting for property: google.cloud.bigquery.external_config.BigtableColumn.qualifier_encoded
Skip inspecting for property: google.cloud.bigquery.external_config.BigtableColumn.qualifier_string
Can't get argspec for : google.cloud.bigquery.external_config.BigtableColumn.to_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.external_config.BigtableColumn.type_
Can't get argspec for : google.cloud.bigquery.external_config.BigtableColumnFamily. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.external_config.BigtableColumnFamily.columns
Skip inspecting for property: google.cloud.bigquery.external_config.BigtableColumnFamily.encoding
Skip inspecting for property: google.cloud.bigquery.external_config.BigtableColumnFamily.family_id
Can't get argspec for : google.cloud.bigquery.external_config.BigtableColumnFamily.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.external_config.BigtableColumnFamily.only_read_latest
Can't get argspec for : google.cloud.bigquery.external_config.BigtableColumnFamily.to_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.external_config.BigtableColumnFamily.type_
Can't get argspec for : google.cloud.bigquery.external_config.BigtableOptions. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.external_config.BigtableOptions.column_families
Can't get argspec for : google.cloud.bigquery.external_config.BigtableOptions.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.external_config.BigtableOptions.ignore_unspecified_column_families
Skip inspecting for property: google.cloud.bigquery.external_config.BigtableOptions.read_rowkey_as_string
Can't get argspec for : google.cloud.bigquery.external_config.BigtableOptions.to_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.external_config.CSVOptions. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.external_config.CSVOptions.allow_jagged_rows
Skip inspecting for property: google.cloud.bigquery.external_config.CSVOptions.allow_quoted_newlines
Skip inspecting for property: google.cloud.bigquery.external_config.CSVOptions.encoding
Skip inspecting for property: google.cloud.bigquery.external_config.CSVOptions.field_delimiter
Can't get argspec for : google.cloud.bigquery.external_config.CSVOptions.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.external_config.CSVOptions.preserve_ascii_control_characters
Skip inspecting for property: google.cloud.bigquery.external_config.CSVOptions.quote_character
Skip inspecting for property: google.cloud.bigquery.external_config.CSVOptions.skip_leading_rows
Can't get argspec for : google.cloud.bigquery.external_config.CSVOptions.to_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.external_config.ExternalConfig. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.external_config.ExternalConfig.autodetect
Skip inspecting for property: google.cloud.bigquery.external_config.ExternalConfig.avro_options
Skip inspecting for property: google.cloud.bigquery.external_config.ExternalConfig.bigtable_options
Skip inspecting for property: google.cloud.bigquery.external_config.ExternalConfig.compression
Skip inspecting for property: google.cloud.bigquery.external_config.ExternalConfig.connection_id
Skip inspecting for property: google.cloud.bigquery.external_config.ExternalConfig.csv_options
Skip inspecting for property: google.cloud.bigquery.external_config.ExternalConfig.decimal_target_types
Can't get argspec for : google.cloud.bigquery.external_config.ExternalConfig.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.external_config.ExternalConfig.google_sheets_options
Skip inspecting for property: google.cloud.bigquery.external_config.ExternalConfig.hive_partitioning
Skip inspecting for property: google.cloud.bigquery.external_config.ExternalConfig.ignore_unknown_values
Skip inspecting for property: google.cloud.bigquery.external_config.ExternalConfig.max_bad_records
Skip inspecting for property: google.cloud.bigquery.external_config.ExternalConfig.options
Skip inspecting for property: google.cloud.bigquery.external_config.ExternalConfig.parquet_options
Skip inspecting for property: google.cloud.bigquery.external_config.ExternalConfig.reference_file_schema_uri
Skip inspecting for property: google.cloud.bigquery.external_config.ExternalConfig.schema
Skip inspecting for property: google.cloud.bigquery.external_config.ExternalConfig.source_format
Skip inspecting for property: google.cloud.bigquery.external_config.ExternalConfig.source_uris
Can't get argspec for : google.cloud.bigquery.external_config.ExternalConfig.to_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.external_config.ExternalSourceFormat. 'NoneType' object is not iterable
Can't inspect type : google.cloud.bigquery.external_config.ExternalSourceFormat.AVRO
Can't inspect type : google.cloud.bigquery.external_config.ExternalSourceFormat.BIGTABLE
Can't inspect type : google.cloud.bigquery.external_config.ExternalSourceFormat.CSV
Can't inspect type : google.cloud.bigquery.external_config.ExternalSourceFormat.DATASTORE_BACKUP
Can't inspect type : google.cloud.bigquery.external_config.ExternalSourceFormat.GOOGLE_SHEETS
Can't inspect type : google.cloud.bigquery.external_config.ExternalSourceFormat.NEWLINE_DELIMITED_JSON
Can't inspect type : google.cloud.bigquery.external_config.ExternalSourceFormat.ORC
Can't inspect type : google.cloud.bigquery.external_config.ExternalSourceFormat.PARQUET
Can't get argspec for : google.cloud.bigquery.external_config.GoogleSheetsOptions. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.external_config.GoogleSheetsOptions.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.external_config.GoogleSheetsOptions.range
Skip inspecting for property: google.cloud.bigquery.external_config.GoogleSheetsOptions.skip_leading_rows
Can't get argspec for : google.cloud.bigquery.external_config.GoogleSheetsOptions.to_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.external_config.HivePartitioningOptions. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.external_config.HivePartitioningOptions.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.external_config.HivePartitioningOptions.mode
Skip inspecting for property: google.cloud.bigquery.external_config.HivePartitioningOptions.require_partition_filter
Skip inspecting for property: google.cloud.bigquery.external_config.HivePartitioningOptions.source_uri_prefix
Can't get argspec for : google.cloud.bigquery.external_config.HivePartitioningOptions.to_api_repr. 'NoneType' object is not iterable
Can't get argspec for : google.cloud.bigquery.encryption_configuration.EncryptionConfiguration.from_api_repr. 'NoneType' object is not iterable
Skip inspecting for property: google.cloud.bigquery.encryption_configuration.EncryptionConfiguration.kms_key_name
Can't get argspec for : google.cloud.bigquery.encryption_configuration.EncryptionConfiguration.to_api_repr. 'NoneType' object is not iterable
reading sources... [ 66%] summary_overview
/tmpfs/src/github/python-bigquery/.nox/docfx/lib/python3.10/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
reading sources... [ 70%] usage/client
reading sources... [ 75%] usage/datasets
reading sources... [ 79%] usage/encryption
reading sources... [ 83%] usage/index
reading sources... [ 87%] usage/jobs
reading sources... [ 91%] usage/pandas
reading sources... [ 95%] usage/queries
reading sources... [100%] usage/tables

WARNING: autodoc: failed to import module 'magics.magics' from module 'google.cloud.bigquery'; the following exception was raised:
This module can only be loaded in IPython.
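
This autodoc warning is benign for the build: the magics module evidently refuses to import outside IPython. A plausible sketch of the module-level guard that produces exactly this message when IPython is missing from the docs environment (the guarded import list is an assumption):

    # Hypothetical guard reproducing the logged message.
    try:
        import IPython  # noqa: F401
        from IPython.core import magic_arguments  # noqa: F401
    except ImportError:
        raise ImportError("This module can only be loaded in IPython.")
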
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
writing output... [ 4%] README
writing output... [ 8%] UPGRADING
writing output... [ 12%] bigquery/legacy_proto_types
writing output... [ 16%] bigquery/standard_sql
writing output... [ 20%] changelog
writing output... [ 25%] dbapi
writing output... [ 29%] design/index
writing output... [ 33%] design/query-retries
writing output... [ 37%] enums
writing output... [ 41%] format_options
writing output... [ 45%] index
writing output... [ 50%] job_base
writing output... [ 54%] magics
writing output... [ 58%] query
writing output... [ 62%] reference
writing output... [ 66%] summary_overview
writing output... [ 70%] usage/client
writing output... [ 75%] usage/datasets
writing output... [ 79%] usage/encryption
writing output... [ 83%] usage/index
writing output... [ 87%] usage/jobs
writing output... [ 91%] usage/pandas
writing output... [ 95%] usage/queries
writing output... [100%] usage/tables

generating indices... genindex py-modindex done
highlighting module code... [ 3%] google.cloud.bigquery.client
highlighting module code... [ 7%] google.cloud.bigquery.dataset
highlighting module code... [ 11%] google.cloud.bigquery.dbapi.connection
highlighting module code... [ 14%] google.cloud.bigquery.dbapi.cursor
highlighting module code... [ 18%] google.cloud.bigquery.dbapi.exceptions
highlighting module code... [ 22%] google.cloud.bigquery.dbapi.types
highlighting module code... [ 25%] google.cloud.bigquery.encryption_configuration
highlighting module code... [ 29%] google.cloud.bigquery.enums
highlighting module code... [ 33%] google.cloud.bigquery.external_config
highlighting module code... [ 37%] google.cloud.bigquery.format_options
highlighting module code... [ 40%] google.cloud.bigquery.job.base
highlighting module code... [ 44%] google.cloud.bigquery.job.copy_
highlighting module code... [ 48%] google.cloud.bigquery.job.extract
highlighting module code... [ 51%] google.cloud.bigquery.job.load
highlighting module code... [ 55%] google.cloud.bigquery.job.query
highlighting module code... [ 59%] google.cloud.bigquery.model
highlighting module code... [ 62%] google.cloud.bigquery.query
highlighting module code... [ 66%] google.cloud.bigquery.routine.routine
highlighting module code... [ 70%] google.cloud.bigquery.schema
highlighting module code... [ 74%] google.cloud.bigquery.standard_sql
highlighting module code... [ 77%] google.cloud.bigquery.table
highlighting module code... [ 81%] google.cloud.bigquery_v2.types.encryption_config
highlighting module code... [ 85%] google.cloud.bigquery_v2.types.model
highlighting module code... [ 88%] google.cloud.bigquery_v2.types.model_reference
highlighting module code... [ 92%] google.cloud.bigquery_v2.types.standard_sql
highlighting module code... [ 96%] google.cloud.bigquery_v2.types.table_reference
highlighting module code... [100%] google.cloud.client

writing additional pages... search done
copying static files... done
copying extra files... done
dumping search index in English (code: en)... done
dumping object inventory... done
build succeeded, 2 warnings.

The HTML pages are in docs/_build/html.
Converted google.cloud.bigquery.dbapi.Connection into cross reference in:
google.cloud.bigquery.dbapi.Connection
Converted google.cloud.bigquery.job.QueryJobConfig into cross reference in:
Optional[google.cloud.bigquery.job.QueryJobConfig]
Converted google.cloud.bigquery.job.LoadJobConfig into cross reference in:
Optional[google.cloud.bigquery.job.LoadJobConfig]
Converted google.cloud.bigquery.job.LoadJob into cross reference in:
Union[ str, google.cloud.bigquery.job.LoadJob, google.cloud.bigquery.job.CopyJob, google.cloud.bigquery.job.ExtractJob, google.cloud.bigquery.job.QueryJob ]
Converted google.cloud.bigquery.job.CopyJob into cross reference in:
Union[ str, google.cloud.bigquery.job.LoadJob, google.cloud.bigquery.job.CopyJob, google.cloud.bigquery.job.ExtractJob, google.cloud.bigquery.job.QueryJob ]
Converted google.cloud.bigquery.job.ExtractJob into cross reference in:
Union[ str, google.cloud.bigquery.job.LoadJob, google.cloud.bigquery.job.CopyJob, google.cloud.bigquery.job.ExtractJob, google.cloud.bigquery.job.QueryJob ]
Converted google.cloud.bigquery.job.QueryJob into cross reference in:
Union[ str, google.cloud.bigquery.job.LoadJob, google.cloud.bigquery.job.CopyJob, google.cloud.bigquery.job.ExtractJob, google.cloud.bigquery.job.QueryJob ]
Converted google.cloud.bigquery.job.LoadJob into cross reference in:
Union[ google.cloud.bigquery.job.LoadJob, google.cloud.bigquery.job.CopyJob, google.cloud.bigquery.job.ExtractJob, google.cloud.bigquery.job.QueryJob, ]
Converted google.cloud.bigquery.job.CopyJob into cross reference in:
Union[ google.cloud.bigquery.job.LoadJob, google.cloud.bigquery.job.CopyJob, google.cloud.bigquery.job.ExtractJob, google.cloud.bigquery.job.QueryJob, ]
Converted google.cloud.bigquery.job.ExtractJob into cross reference in:
Union[ google.cloud.bigquery.job.LoadJob, google.cloud.bigquery.job.CopyJob, google.cloud.bigquery.job.ExtractJob, google.cloud.bigquery.job.QueryJob, ]
Converted google.cloud.bigquery.job.QueryJob into cross reference in:
Union[ google.cloud.bigquery.job.LoadJob, google.cloud.bigquery.job.CopyJob, google.cloud.bigquery.job.ExtractJob, google.cloud.bigquery.job.QueryJob, ]
Converted google.cloud.bigquery.table.Table into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, Sequence[ Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ] ], ]
Converted google.cloud.bigquery.table.TableReference into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, Sequence[ Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ] ], ]
Converted google.cloud.bigquery.table.TableListItem into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, Sequence[ Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ] ], ]
Converted google.cloud.bigquery.table.Table into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, Sequence[ Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ] ], ]
Converted google.cloud.bigquery.table.TableReference into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, Sequence[ Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ] ], ]
Converted google.cloud.bigquery.table.TableListItem into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, Sequence[ Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ] ], ]
Converted google.cloud.bigquery.table.Table into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ]
Converted google.cloud.bigquery.table.TableReference into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ]
Converted google.cloud.bigquery.table.TableListItem into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ]
Converted google.cloud.bigquery.job.CopyJobConfig into cross reference in:
Optional[google.cloud.bigquery.job.CopyJobConfig]
Converted google.cloud.bigquery.job.CopyJob into cross reference in:
google.cloud.bigquery.job.CopyJob
Converted google.cloud.bigquery.dataset.Dataset into cross reference in:
Union[ google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str, ]
Converted google.cloud.bigquery.dataset.DatasetReference into cross reference in:
Union[ google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str, ]
Converted google.cloud.bigquery.dataset.DatasetListItem into cross reference in:
Union[ google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str, ]
Converted google.cloud.bigquery.dataset.Dataset into cross reference in:
google.cloud.bigquery.dataset.Dataset
Converted google.cloud.bigquery.job.LoadJob into cross reference in:
Union[ google.cloud.bigquery.job.LoadJob, google.cloud.bigquery.job.CopyJob, google.cloud.bigquery.job.ExtractJob, google.cloud.bigquery.job.QueryJob ]
Converted google.cloud.bigquery.job.CopyJob into cross reference in:
Union[ google.cloud.bigquery.job.LoadJob, google.cloud.bigquery.job.CopyJob, google.cloud.bigquery.job.ExtractJob, google.cloud.bigquery.job.QueryJob ]
Converted google.cloud.bigquery.job.ExtractJob into cross reference in:
Union[ google.cloud.bigquery.job.LoadJob, google.cloud.bigquery.job.CopyJob, google.cloud.bigquery.job.ExtractJob, google.cloud.bigquery.job.QueryJob ]
Converted google.cloud.bigquery.job.QueryJob into cross reference in:
Union[ google.cloud.bigquery.job.LoadJob, google.cloud.bigquery.job.CopyJob, google.cloud.bigquery.job.ExtractJob, google.cloud.bigquery.job.QueryJob ]
Converted google.cloud.bigquery.routine.Routine into cross reference in:
google.cloud.bigquery.routine.Routine
Converted google.cloud.bigquery.routine.Routine into cross reference in:
google.cloud.bigquery.routine.Routine
Converted google.cloud.bigquery.table.Table into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ]
Converted google.cloud.bigquery.table.TableReference into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ]
Converted google.cloud.bigquery.table.TableListItem into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ]
Converted google.cloud.bigquery.table.Table into cross reference in:
google.cloud.bigquery.table.Table
Converted google.cloud.bigquery.dataset.DatasetReference into cross reference in:
google.cloud.bigquery.dataset.DatasetReference
Converted google.cloud.bigquery.dataset.Dataset into cross reference in:
Union[ google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str, ]
Converted google.cloud.bigquery.dataset.DatasetReference into cross reference in:
Union[ google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str, ]
Converted google.cloud.bigquery.dataset.DatasetListItem into cross reference in:
Union[ google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str, ]
Converted google.cloud.bigquery.model.Model into cross reference in:
Union[ google.cloud.bigquery.model.Model, google.cloud.bigquery.model.ModelReference, str, ]
Converted google.cloud.bigquery.model.ModelReference into cross reference in:
Union[ google.cloud.bigquery.model.Model, google.cloud.bigquery.model.ModelReference, str, ]
Converted google.cloud.bigquery.routine.Routine into cross reference in:
Union[ google.cloud.bigquery.routine.Routine, google.cloud.bigquery.routine.RoutineReference, str, ]
Converted google.cloud.bigquery.routine.RoutineReference into cross reference in:
Union[ google.cloud.bigquery.routine.Routine, google.cloud.bigquery.routine.RoutineReference, str, ]
Converted google.cloud.bigquery.table.Table into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ]
Converted google.cloud.bigquery.table.TableReference into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ]
Converted google.cloud.bigquery.table.TableListItem into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ]
Converted google.cloud.bigquery.table.Table into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, google.cloud.bigquery.model.Model, google.cloud.bigquery.model.ModelReference, str, ]
Converted google.cloud.bigquery.table.TableReference into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, google.cloud.bigquery.model.Model, google.cloud.bigquery.model.ModelReference, str, ]
Converted google.cloud.bigquery.table.TableListItem into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, google.cloud.bigquery.model.Model, google.cloud.bigquery.model.ModelReference, str, ]
Converted google.cloud.bigquery.model.Model into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, google.cloud.bigquery.model.Model, google.cloud.bigquery.model.ModelReference, str, ]
Converted google.cloud.bigquery.model.ModelReference into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, google.cloud.bigquery.model.Model, google.cloud.bigquery.model.ModelReference, str, ]
Converted google.cloud.bigquery.job.ExtractJobConfig into cross reference in:
Optional[google.cloud.bigquery.job.ExtractJobConfig]
Converted google.cloud.bigquery.job.ExtractJob into cross reference in:
google.cloud.bigquery.job.ExtractJob
Converted google.cloud.bigquery.dataset.DatasetReference into cross reference in:
Union[ google.cloud.bigquery.dataset.DatasetReference, str, ]
Converted google.cloud.bigquery.dataset.Dataset into cross reference in:
google.cloud.bigquery.dataset.Dataset
Converted google.cloud.bigquery.table.Table into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ]
Converted google.cloud.bigquery.table.TableReference into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ]
Converted google.cloud.bigquery.table.TableListItem into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ]
Converted google.cloud.bigquery.model.ModelReference into cross reference in:
Union[ google.cloud.bigquery.model.ModelReference, str, ]
Converted google.cloud.bigquery.model.Model into cross reference in:
google.cloud.bigquery.model.Model
Converted google.cloud.bigquery.routine.Routine into cross reference in:
Union[ google.cloud.bigquery.routine.Routine, google.cloud.bigquery.routine.RoutineReference, str, ]
Converted google.cloud.bigquery.routine.RoutineReference into cross reference in:
Union[ google.cloud.bigquery.routine.Routine, google.cloud.bigquery.routine.RoutineReference, str, ]
Converted google.cloud.bigquery.routine.Routine into cross reference in:
google.cloud.bigquery.routine.Routine
Converted google.cloud.bigquery.table.Table into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ]
Converted google.cloud.bigquery.table.TableReference into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ]
Converted google.cloud.bigquery.table.TableListItem into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ]
Converted google.cloud.bigquery.table.Table into cross reference in:
google.cloud.bigquery.table.Table
Converted google.cloud.bigquery.table.Table into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, str, ]
Converted google.cloud.bigquery.table.TableReference into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, str, ]
Converted google.cloud.bigquery.schema.SchemaField into cross reference in:
Sequence[google.cloud.bigquery.schema.SchemaField]
Converted google.cloud.bigquery.table.Table into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, str, ]
Converted google.cloud.bigquery.table.TableReference into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, str, ]
Converted google.cloud.bigquery.schema.SchemaField into cross reference in:
Sequence[google.cloud.bigquery.schema.SchemaField]
Converted google.cloud.bigquery.table.Table into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str ]
Converted google.cloud.bigquery.table.TableReference into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str ]
Converted google.cloud.bigquery.table.TableListItem into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str ]
Converted google.cloud.bigquery.job into cross reference in:
Optional[Union[ google.cloud.bigquery.job._AsyncJob, str, ]]
Converted google.cloud.bigquery.dataset.Dataset into cross reference in:
Union[ google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str, ]
Converted google.cloud.bigquery.dataset.DatasetReference into cross reference in:
Union[ google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str, ]
Converted google.cloud.bigquery.dataset.DatasetListItem into cross reference in:
Union[ google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str, ]
Converted google.cloud.bigquery.table.Table into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ]
Converted google.cloud.bigquery.table.TableReference into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ]
Converted google.cloud.bigquery.table.TableListItem into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ]
Converted google.cloud.bigquery.dataset.Dataset into cross reference in:
Union[ google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str, ]
Converted google.cloud.bigquery.dataset.DatasetReference into cross reference in:
Union[ google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str, ]
Converted google.cloud.bigquery.dataset.DatasetListItem into cross reference in:
Union[ google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str, ]
Converted google.cloud.bigquery.table.Table into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableListItem, google.cloud.bigquery.table.TableReference, str, ]
Converted google.cloud.bigquery.table.TableListItem into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableListItem, google.cloud.bigquery.table.TableReference, str, ]
Converted google.cloud.bigquery.table.TableReference into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableListItem, google.cloud.bigquery.table.TableReference, str, ]
Converted google.cloud.bigquery.schema.SchemaField into cross reference in:
Sequence[google.cloud.bigquery.schema.SchemaField]
Converted google.cloud.bigquery.table.RowIterator into cross reference in:
google.cloud.bigquery.table.RowIterator
Converted google.cloud.bigquery.dataset.Dataset into cross reference in:
Union[ google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str, ]
Converted google.cloud.bigquery.dataset.DatasetReference into cross reference in:
Union[ google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str, ]
Converted google.cloud.bigquery.dataset.DatasetListItem into cross reference in:
Union[ google.cloud.bigquery.dataset.Dataset, google.cloud.bigquery.dataset.DatasetReference, google.cloud.bigquery.dataset.DatasetListItem, str, ]
Converted google.cloud.bigquery.job.LoadJob into cross reference in:
google.cloud.bigquery.job.LoadJob
Converted google.cloud.bigquery.job.LoadJob into cross reference in:
google.cloud.bigquery.job.LoadJob
Converted google.cloud.bigquery.job.LoadJob into cross reference in:
google.cloud.bigquery.job.LoadJob
Converted google.cloud.bigquery.table.Table into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ]
Converted google.cloud.bigquery.table.TableReference into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ]
Converted google.cloud.bigquery.table.TableListItem into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ]
Converted google.cloud.bigquery.job.LoadJobConfig into cross reference in:
Optional[google.cloud.bigquery.job.LoadJobConfig]
Converted google.cloud.bigquery.job.LoadJob into cross reference in:
google.cloud.bigquery.job.LoadJob
Converted google.cloud.bigquery.job.QueryJobConfig into cross reference in:
Optional[google.cloud.bigquery.job.QueryJobConfig]
Converted google.cloud.bigquery.job.QueryJob into cross reference in:
google.cloud.bigquery.job.QueryJob
Converted google.cloud.bigquery.job.QueryJobConfig into cross reference in:
Optional[google.cloud.bigquery.job.QueryJobConfig]
Converted google.cloud.bigquery.table.RowIterator into cross reference in:
google.cloud.bigquery.table.RowIterator
Converted google.cloud.bigquery.table.Table into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ]
Converted google.cloud.bigquery.table.TableReference into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ]
Converted google.cloud.bigquery.table.TableListItem into cross reference in:
Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, google.cloud.bigquery.table.TableListItem, str, ]
Converted google.cloud.bigquery.dataset.Dataset into cross reference in:
google.cloud.bigquery.dataset.Dataset
Converted google.cloud.bigquery.dataset.Dataset into cross reference in:
google.cloud.bigquery.dataset.Dataset
Converted google.cloud.bigquery.model.Model into cross reference in:
google.cloud.bigquery.model.Model
Converted google.cloud.bigquery.model.Model into cross reference in:
google.cloud.bigquery.model.Model
Converted google.cloud.bigquery.routine.Routine into cross reference in:
google.cloud.bigquery.routine.Routine
Converted google.cloud.bigquery.routine.Routine into cross reference in:
google.cloud.bigquery.routine.Routine
Converted google.cloud.bigquery.table.Table into cross reference in:
google.cloud.bigquery.table.Table
Converted google.cloud.bigquery.table.Table into cross reference in:
google.cloud.bigquery.table.Table
Converted google.cloud.bigquery.dataset.DatasetReference into cross reference in:
Union[google.cloud.bigquery.dataset.DatasetReference, str]
Converted google.cloud.bigquery.dataset.AccessEntry into cross reference in:
List[google.cloud.bigquery.dataset.AccessEntry]: Dataset's access
entries.

`role` augments the entity type and must be present **unless** the
entity type is `view` or `routine`.
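
A minimal sketch of that rule with the Python client (the email address and the view IDs below are hypothetical placeholders):

    from google.cloud import bigquery

    # Regular grants carry a role...
    reader = bigquery.AccessEntry(
        role="READER", entity_type="userByEmail", entity_id="user@example.com"
    )

    # ...but a view grant carries no role; access is implied by the view itself.
    view_entry = bigquery.AccessEntry(
        role=None,
        entity_type="view",
        entity_id={
            "projectId": "your-project",
            "datasetId": "your_dataset",
            "tableId": "your_view",
        },
    )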

Converted google.cloud.bigquery.encryption_configuration.EncryptionConfiguration into cross reference in:
google.cloud.bigquery.encryption_configuration.EncryptionConfiguration: Custom
encryption configuration for all tables in the dataset.

Custom encryption configuration (e.g., Cloud KMS keys) or :data:`None`
if using default encryption.

See `protecting data with Cloud KMS keys`_
in the BigQuery documentation.
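
A minimal sketch of setting that dataset-wide default with the Python client, assuming a 3.x google-cloud-bigquery install; the dataset ID and KMS key resource name are placeholders:

    from google.cloud import bigquery

    client = bigquery.Client()
    dataset = client.get_dataset("your-project.your_dataset")  # placeholder ID
    dataset.default_encryption_configuration = bigquery.EncryptionConfiguration(
        kms_key_name=(
            "projects/your-project/locations/us/keyRings/your-ring/cryptoKeys/your-key"
        )
    )
    dataset = client.update_dataset(dataset, ["default_encryption_configuration"])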


Converted google.cloud.bigquery.model.ModelReference into cross reference in:
google.cloud.bigquery.model.ModelReference
Converted google.cloud.bigquery.dataset.DatasetReference into cross reference in:
google.cloud.bigquery.dataset.DatasetReference: A reference to this
dataset.


Converted google.cloud.bigquery.routine.RoutineReference into cross reference in:
google.cloud.bigquery.routine.RoutineReference
Converted google.cloud.bigquery.table.TableReference into cross reference in:
google.cloud.bigquery.table.TableReference
Converted google.cloud.bigquery.model.ModelReference into cross reference in:
google.cloud.bigquery.model.ModelReference
Converted google.cloud.bigquery.dataset.DatasetReference into cross reference in:
google.cloud.bigquery.dataset.DatasetReference: A reference to this
dataset.


Converted google.cloud.bigquery.routine.RoutineReference into cross reference in:
google.cloud.bigquery.routine.RoutineReference
Converted google.cloud.bigquery.table.TableReference into cross reference in:
google.cloud.bigquery.table.TableReference
Converted google.cloud.bigquery.model.ModelReference into cross reference in:
google.cloud.bigquery.model.ModelReference
Converted google.cloud.bigquery.routine.RoutineReference into cross reference in:
google.cloud.bigquery.routine.RoutineReference
Converted google.cloud.bigquery.table.TableReference into cross reference in:
google.cloud.bigquery.table.TableReference
Converted google.cloud.bigquery.dbapi.Cursor into cross reference in:
google.cloud.bigquery.dbapi.Cursor
Converted google.cloud.bigquery.dbapi.Connection into cross reference in:
google.cloud.bigquery.dbapi.Connection
Converted google.cloud.bigquery.job.QueryJobConfig into cross reference in:
google.cloud.bigquery.job.QueryJobConfig
Converted google.cloud.bigquery.dbapi.InterfaceError into cross reference in:
google.cloud.bigquery.dbapi.InterfaceError
Converted google.cloud.bigquery.dbapi.InterfaceError into cross reference in:
google.cloud.bigquery.dbapi.InterfaceError
Converted google.cloud.bigquery.dbapi.InterfaceError into cross reference in:
google.cloud.bigquery.dbapi.InterfaceError
Converted google.cloud.bigquery.job into cross reference in:
google.cloud.bigquery.job.query.QueryJob | None: The query job
created by the last `execute*()` call, if a query job was created.


Converted google.cloud.bigquery.table into cross reference in:
google.cloud.bigquery.table.EncryptionConfiguration
Converted google.cloud.bigquery.table.TableReference into cross reference in:
List[google.cloud.bigquery.table.TableReference]
Converted google.cloud.bigquery.table.TableReference into cross reference in:
google.cloud.bigquery.table.TableReference
Converted google.cloud.bigquery.client.Client into cross reference in:
google.cloud.bigquery.client.Client
Converted google.cloud.bigquery.job.CopyJobConfig into cross reference in:
Optional[google.cloud.bigquery.job.CopyJobConfig]
Converted google.cloud.bigquery.client.Client into cross reference in:
Optional[google.cloud.bigquery.client.Client]
Converted google.cloud.bigquery.table.TableReference into cross reference in:
google.cloud.bigquery.table.TableReference: Table into which data
is to be loaded.


Converted google.cloud.bigquery.encryption_configuration.EncryptionConfiguration into cross reference in:
google.cloud.bigquery.encryption_configuration.EncryptionConfiguration: Custom
encryption configuration for the destination table.

Custom encryption configuration (e.g., Cloud KMS keys) or :data:`None`
if using default encryption.

See
destination_encryption_configuration.


Converted google.cloud.bigquery.client.Client into cross reference in:
Optional[google.cloud.bigquery.client.Client]
Converted google.cloud.bigquery.client.Client into cross reference in:
google.cloud.bigquery.client.Client
Converted google.cloud.bigquery.client.Client into cross reference in:
Optional[google.cloud.bigquery.client.Client]
Converted google.cloud.bigquery.job.ReservationUsage into cross reference in:
List[google.cloud.bigquery.job.ReservationUsage]
Converted google.cloud.bigquery.table.TableReference into cross reference in:
List[google.cloud.bigquery.table.TableReference]: Table(s) from
which data is to be loaded.


Converted google.cloud.bigquery.job.CreateDisposition into cross reference in:
google.cloud.bigquery.job.CreateDisposition: Specifies behavior
for creating tables.

See
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationTableCopy.FIELDS.create_disposition


Converted google.cloud.bigquery.encryption_configuration.EncryptionConfiguration into cross reference in:
google.cloud.bigquery.encryption_configuration.EncryptionConfiguration: Custom
encryption configuration for the destination table.

Custom encryption configuration (e.g., Cloud KMS keys) or :data:`None`
if using default encryption.

See
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationTableCopy.FIELDS.destination_encryption_configuration


Converted google.cloud.bigquery.job.WriteDisposition into cross reference in:
google.cloud.bigquery.job.WriteDisposition: Action that occurs if
the destination table already exists.

See
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationTableCopy.FIELDS.write_disposition
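
A minimal sketch of both copy-job dispositions together; the table IDs are placeholders:

    from google.cloud import bigquery

    client = bigquery.Client()
    job_config = bigquery.CopyJobConfig(
        create_disposition=bigquery.CreateDisposition.CREATE_IF_NEEDED,
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    )
    copy_job = client.copy_table(
        "your-project.your_dataset.source_table",  # placeholder IDs
        "your-project.your_dataset.dest_table",
        job_config=job_config,
    )
    copy_job.result()  # wait for the copy to finish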


Converted google.cloud.bigquery.table.TableReference into cross reference in:
Union[ google.cloud.bigquery.table.TableReference, google.cloud.bigquery.model.ModelReference ]
Converted google.cloud.bigquery.model.ModelReference into cross reference in:
Union[ google.cloud.bigquery.table.TableReference, google.cloud.bigquery.model.ModelReference ]
Converted google.cloud.bigquery.client.Client into cross reference in:
google.cloud.bigquery.client.Client
Converted google.cloud.bigquery.job.ExtractJobConfig into cross reference in:
Optional[google.cloud.bigquery.job.ExtractJobConfig]
Converted google.cloud.bigquery.client.Client into cross reference in:
Optional[google.cloud.bigquery.client.Client]
Converted google.cloud.bigquery.client.Client into cross reference in:
Optional[google.cloud.bigquery.client.Client]
Converted google.cloud.bigquery.client.Client into cross reference in:
google.cloud.bigquery.client.Client
Converted google.cloud.bigquery.client.Client into cross reference in:
Optional[google.cloud.bigquery.client.Client]
Converted google.cloud.bigquery.job.ReservationUsage into cross reference in:
List[google.cloud.bigquery.job.ReservationUsage]
Converted google.cloud.bigquery.table.TableReference into cross reference in:
Union[ google.cloud.bigquery.table.TableReference, google.cloud.bigquery.model.ModelReference ]: Table or Model from which data is to be loaded or extracted.



Converted google.cloud.bigquery.model.ModelReference into cross reference in:
Union[ google.cloud.bigquery.table.TableReference, google.cloud.bigquery.model.ModelReference ]: Table or Model from which data is to be loaded or extracted.



Converted google.cloud.bigquery.job.Compression into cross reference in:
google.cloud.bigquery.job.Compression: Compression type to use for
exported files.

See
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationExtract.FIELDS.compression


Converted google.cloud.bigquery.job.DestinationFormat into cross reference in:
google.cloud.bigquery.job.DestinationFormat: Exported file format.

See
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationExtract.FIELDS.destination_format
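
A minimal sketch combining both extract options; the table ID and Cloud Storage URI are placeholders:

    from google.cloud import bigquery

    client = bigquery.Client()
    job_config = bigquery.ExtractJobConfig(
        compression=bigquery.Compression.GZIP,
        destination_format=bigquery.DestinationFormat.NEWLINE_DELIMITED_JSON,
    )
    extract_job = client.extract_table(
        "your-project.your_dataset.your_table",     # placeholder table ID
        "gs://your-bucket/exports/rows-*.json.gz",  # placeholder GCS URI
        job_config=job_config,
    )
    extract_job.result()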


Converted google.cloud.bigquery.table.TableReference into cross reference in:
google.cloud.bigquery.table.TableReference
Converted google.cloud.bigquery.client.Client into cross reference in:
google.cloud.bigquery.client.Client
Converted google.cloud.bigquery.client.Client into cross reference in:
Optional[google.cloud.bigquery.client.Client]
Converted google.cloud.bigquery.table.TableReference into cross reference in:
google.cloud.bigquery.table.TableReference: table where loaded rows are written

See:
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.destination_table


Converted google.cloud.bigquery.encryption_configuration.EncryptionConfiguration into cross reference in:
google.cloud.bigquery.encryption_configuration.EncryptionConfiguration: Custom
encryption configuration for the destination table.

Custom encryption configuration (e.g., Cloud KMS keys)
or :data:`None` if using default encryption.

See
destination_encryption_configuration.


Converted google.cloud.bigquery.client.Client into cross reference in:
Optional[google.cloud.bigquery.client.Client]
Converted google.cloud.bigquery.client.Client into cross reference in:
google.cloud.bigquery.client.Client
Converted google.cloud.bigquery.job.LoadJobConfig.reference_file_schema_uri into cross reference in:
See:
attr:`google.cloud.bigquery.job.LoadJobConfig.reference_file_schema_uri`.


Converted google.cloud.bigquery.client.Client into cross reference in:
Optional[google.cloud.bigquery.client.Client]
Converted google.cloud.bigquery.job.ReservationUsage into cross reference in:
List[google.cloud.bigquery.job.ReservationUsage]
Converted google.cloud.bigquery.job.CreateDisposition into cross reference in:
Optional[google.cloud.bigquery.job.CreateDisposition]: Specifies behavior
for creating tables.

See:
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.create_disposition


Converted google.cloud.bigquery.encryption_configuration.EncryptionConfiguration into cross reference in:
Optional[google.cloud.bigquery.encryption_configuration.EncryptionConfiguration]: Custom
encryption configuration for the destination table.

Custom encryption configuration (e.g., Cloud KMS keys) or :data:`None`
if using default encryption.

See:
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.destination_encryption_configuration


Converted google.cloud.bigquery.job.Encoding into cross reference in:
Optional[google.cloud.bigquery.job.Encoding]: The character encoding of the
data.

See:
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.encoding


Converted google.cloud.bigquery.format_options.ParquetOptions into cross reference in:
Optional[google.cloud.bigquery.format_options.ParquetOptions]: Additional
properties to set if `sourceFormat` is set to PARQUET.

See:
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.parquet_options


Converted google.cloud.bigquery.table.RangePartitioning into cross reference in:
Optional[google.cloud.bigquery.table.RangePartitioning]:
Configures range-based partitioning for destination table.


Only specify at most one of
time_partitioning or
range_partitioning.

Converted google.cloud.bigquery.job.SchemaUpdateOption into cross reference in:
Optional[List[google.cloud.bigquery.job.SchemaUpdateOption]]: Specifies
updates to the destination table schema to allow as a side effect of
the load job.


Converted google.cloud.bigquery.job.SourceFormat into cross reference in:
Optional[google.cloud.bigquery.job.SourceFormat]: File format of the data.

See:
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.source_format


Converted google.cloud.bigquery.table.TimePartitioning into cross reference in:
Optional[google.cloud.bigquery.table.TimePartitioning]: Specifies time-based
partitioning for the destination table.

Only specify at most one of
time_partitioning or
range_partitioning.


Converted google.cloud.bigquery.job.WriteDisposition into cross reference in:
Optional[google.cloud.bigquery.job.WriteDisposition]: Action that occurs if
the destination table already exists.

See:
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationLoad.FIELDS.write_disposition
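
A minimal load-job sketch exercising several of the properties above (source_format, encoding, the dispositions, and schema_update_options); the URI and table ID are placeholders:

    from google.cloud import bigquery

    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        encoding="UTF-8",
        create_disposition=bigquery.CreateDisposition.CREATE_IF_NEEDED,
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
        schema_update_options=[bigquery.SchemaUpdateOption.ALLOW_FIELD_ADDITION],
    )
    load_job = client.load_table_from_uri(
        "gs://your-bucket/data.csv",             # placeholder URI
        "your-project.your_dataset.your_table",  # placeholder table ID
        job_config=job_config,
    )
    load_job.result()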


Converted google.cloud.bigquery.client.Client into cross reference in:
google.cloud.bigquery.client.Client
Converted google.cloud.bigquery.job.QueryJobConfig into cross reference in:
Optional[google.cloud.bigquery.job.QueryJobConfig]
Converted google.cloud.bigquery.client.Client into cross reference in:
Optional[google.cloud.bigquery.client.Client]
Converted google.cloud.bigquery.routine.RoutineReference into cross reference in:
Optional[google.cloud.bigquery.routine.RoutineReference]: Return the DDL target routine, present
for CREATE/DROP FUNCTION/PROCEDURE queries.

See:
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobStatistics2.FIELDS.ddl_target_routine


Converted google.cloud.bigquery.table.TableReference into cross reference in:
Optional[google.cloud.bigquery.table.TableReference]: Return the DDL target table, present
for CREATE/DROP TABLE/VIEW queries.

See:
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobStatistics2.FIELDS.ddl_target_table
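
A minimal sketch of reading that property after a DDL statement; the table ID is a placeholder:

    from google.cloud import bigquery

    client = bigquery.Client()
    query_job = client.query(
        "CREATE TABLE `your-project.your_dataset.new_table` (x INT64)"  # placeholder
    )
    query_job.result()
    print(query_job.ddl_target_table)  # TableReference of the table the DDL created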


Converted google.cloud.bigquery.encryption_configuration.EncryptionConfiguration into cross reference in:
google.cloud.bigquery.encryption_configuration.EncryptionConfiguration: Custom
encryption configuration for the destination table.

Custom encryption configuration (e.g., Cloud KMS keys) or :data:`None`
if using default encryption.

See
destination_encryption_configuration.


Converted google.cloud.bigquery.client.Client into cross reference in:
Optional[google.cloud.bigquery.client.Client]
Converted google.cloud.bigquery.client.Client into cross reference in:
google.cloud.bigquery.client.Client
Converted google.cloud.bigquery.job.QueryPlanEntry into cross reference in:
List[google.cloud.bigquery.job.QueryPlanEntry]
Converted google.cloud.bigquery.client.Client into cross reference in:
Optional[google.cloud.bigquery.client.Client]
Converted google.cloud.bigquery.job.ReservationUsage into cross reference in:
List[google.cloud.bigquery.job.ReservationUsage]
Converted google.cloud.bigquery.table.RowIterator into cross reference in:
google.cloud.bigquery.table.RowIterator
Converted google.cloud.bigquery.query.ArrayQueryParameter into cross reference in:
List[Union[ google.cloud.bigquery.query.ArrayQueryParameter, google.cloud.bigquery.query.ScalarQueryParameter, google.cloud.bigquery.query.StructQueryParameter ]]
Converted google.cloud.bigquery.query.ScalarQueryParameter into cross reference in:
List[Union[ google.cloud.bigquery.query.ArrayQueryParameter, google.cloud.bigquery.query.ScalarQueryParameter, google.cloud.bigquery.query.StructQueryParameter ]]
Converted google.cloud.bigquery.query.StructQueryParameter into cross reference in:
List[Union[ google.cloud.bigquery.query.ArrayQueryParameter, google.cloud.bigquery.query.ScalarQueryParameter, google.cloud.bigquery.query.StructQueryParameter ]]
Converted google.cloud.bigquery.job.CreateDisposition into cross reference in:
google.cloud.bigquery.job.CreateDisposition: Specifies behavior
for creating tables.

See
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationQuery.FIELDS.create_disposition


Converted google.cloud.bigquery.dataset.DatasetReference into cross reference in:
google.cloud.bigquery.dataset.DatasetReference: the default dataset
to use for unqualified table names in the query or :data:`None` if not
set.

The `default_dataset` setter accepts:

- a Dataset, or
- a DatasetReference, or
- a `str` of the fully-qualified dataset ID in standard SQL
format. The value must include a project ID and dataset ID
separated by `.`. For example: `your-project.your_dataset`.

See
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationQuery.FIELDS.default_dataset
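
A minimal sketch of the string form of that setter; the dataset ID and table name are placeholders:

    from google.cloud import bigquery

    client = bigquery.Client()
    job_config = bigquery.QueryJobConfig(
        default_dataset="your-project.your_dataset"  # placeholder dataset ID
    )
    # "your_table" is unqualified, so it resolves against the default dataset.
    for row in client.query("SELECT * FROM your_table LIMIT 10", job_config=job_config):
        print(row)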


Converted google.cloud.bigquery.table.TableReference into cross reference in:
google.cloud.bigquery.table.TableReference: table where results are
written or :data:`None` if not set.

The `destination` setter accepts:

- a Table, or
- a TableReference, or
- a `str` of the fully-qualified table ID in standard SQL
format. The value must include a project ID, dataset ID, and table
ID, each separated by `.`. For example:
`your-project.your_dataset.your_table`.
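
A minimal sketch using the string form of the destination setter; the table ID is a placeholder:

    from google.cloud import bigquery

    client = bigquery.Client()
    job_config = bigquery.QueryJobConfig(
        destination="your-project.your_dataset.results"  # placeholder table ID
    )
    client.query("SELECT 17 AS answer", job_config=job_config).result()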


Converted google.cloud.bigquery.encryption_configuration.EncryptionConfiguration into cross reference in:
google.cloud.bigquery.encryption_configuration.EncryptionConfiguration: Custom
encryption configuration for the destination table.

Custom encryption configuration (e.g., Cloud KMS keys) or :data:`None`
if using default encryption.

See
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationQuery.FIELDS.destination_encryption_configuration


Converted google.cloud.bigquery.job.QueryPriority into cross reference in:
google.cloud.bigquery.job.QueryPriority: Priority of the query.

See
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationQuery.FIELDS.priority


Converted google.cloud.bigquery.query.ArrayQueryParameter into cross reference in:
List[Union[google.cloud.bigquery.query.ArrayQueryParameter, google.cloud.bigquery.query.ScalarQueryParameter, google.cloud.bigquery.query.StructQueryParameter]]: list of parameters
for parameterized query (empty by default)

See:
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationQuery.FIELDS.query_parameters


Converted google.cloud.bigquery.query.ScalarQueryParameter into cross reference in:
List[Union[google.cloud.bigquery.query.ArrayQueryParameter, google.cloud.bigquery.query.ScalarQueryParameter, google.cloud.bigquery.query.StructQueryParameter]]: list of parameters
for parameterized query (empty by default)

See:
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationQuery.FIELDS.query_parameters


Converted google.cloud.bigquery.query.StructQueryParameter into cross reference in:
List[Union[google.cloud.bigquery.query.ArrayQueryParameter, google.cloud.bigquery.query.ScalarQueryParameter, google.cloud.bigquery.query.StructQueryParameter]]: list of parameters
for parameterized query (empty by default)

See:
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationQuery.FIELDS.query_parameters
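
A minimal parameterized-query sketch with one scalar and one array parameter, run against the public usa_names table:

    from google.cloud import bigquery

    client = bigquery.Client()
    job_config = bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("min_count", "INT64", 100),
            bigquery.ArrayQueryParameter("states", "STRING", ["WA", "OR"]),
        ]
    )
    sql = """
        SELECT name
        FROM `bigquery-public-data.usa_names.usa_1910_2013`
        WHERE state IN UNNEST(@states) AND number > @min_count
        LIMIT 10
    """
    for row in client.query(sql, job_config=job_config):
        print(row["name"])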


Converted google.cloud.bigquery.table.RangePartitioning into cross reference in:
Optional[google.cloud.bigquery.table.RangePartitioning]:
Configures range-based partitioning for destination table.


Only specify at most one of
time_partitioning or
range_partitioning.

Converted google.cloud.bigquery.job.SchemaUpdateOption into cross reference in:
List[google.cloud.bigquery.job.SchemaUpdateOption]: Specifies
updates to the destination table schema to allow as a side effect of
the query job.


Converted google.cloud.bigquery.external_config.ExternalConfig into cross reference in:
Dict[str, google.cloud.bigquery.external_config.ExternalConfig]:
Definitions for external tables or :data:`None` if not set.

See
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationQuery.FIELDS.external_table_definitions


Converted google.cloud.bigquery.table.TimePartitioning into cross reference in:
Optional[google.cloud.bigquery.table.TimePartitioning]: Specifies
time-based partitioning for the destination table.

Only specify at most one of
time_partitioning or
range_partitioning.
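
A minimal sketch of the two mutually exclusive options; the column names are placeholders:

    from google.cloud import bigquery

    # Time-based partitioning on a (placeholder) timestamp column...
    time_config = bigquery.QueryJobConfig(
        time_partitioning=bigquery.TimePartitioning(
            type_=bigquery.TimePartitioningType.DAY, field="created_at"
        )
    )

    # ...or integer-range partitioning on a (placeholder) integer column, never both.
    range_config = bigquery.QueryJobConfig(
        range_partitioning=bigquery.RangePartitioning(
            field="customer_id",
            range_=bigquery.PartitionRange(start=0, end=100000, interval=1000),
        )
    )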

Converted google.cloud.bigquery.query.UDFResource into cross reference in:
List[google.cloud.bigquery.query.UDFResource]: user
defined function resources (empty by default)

See:
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationQuery.FIELDS.user_defined_function_resources


Converted google.cloud.bigquery.job.WriteDisposition into cross reference in:
google.cloud.bigquery.job.WriteDisposition: Action that occurs if
the destination table already exists.

See
https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobConfigurationQuery.FIELDS.write_disposition


Converted google.cloud.bigquery.client.Client into cross reference in:
Optional[google.cloud.bigquery.client.Client]
Converted google.cloud.bigquery.client.Client into cross reference in:
Optional[google.cloud.bigquery.client.Client]
Converted google.cloud.bigquery.client.Client into cross reference in:
google.cloud.bigquery.client.Client
Converted google.cloud.bigquery.client.Client into cross reference in:
Optional[google.cloud.bigquery.client.Client]
Converted google.cloud.bigquery.job.ReservationUsage into cross reference in:
List[google.cloud.bigquery.job.ReservationUsage]
Converted google.cloud.bigquery.client.Client into cross reference in:
Optional[google.cloud.bigquery.client.Client]
Converted google.cloud.bigquery.client.Client into cross reference in:
Optional[google.cloud.bigquery.client.Client]
Converted google.cloud.bigquery.client.Client into cross reference in:
google.cloud.bigquery.client.Client
Converted google.cloud.bigquery.client.Client into cross reference in:
Optional[google.cloud.bigquery.client.Client]
Converted google.cloud.bigquery.job.ReservationUsage into cross reference in:
List[google.cloud.bigquery.job.ReservationUsage]
Converted google.cloud.bigquery.standard_sql.StandardSqlDataType into cross reference in:
Optional[google.cloud.bigquery.standard_sql.StandardSqlDataType]
Converted google.cloud.bigquery.routine.RoutineReference into cross reference in:
Union[str, google.cloud.bigquery.routine.RoutineReference]
Converted google.cloud.bigquery.routine.RoutineArgument into cross reference in:
List[google.cloud.bigquery.routine.RoutineArgument]: Input/output
argument of a function or a stored procedure.

In-place modification is not supported. To set, replace the entire
property value with the modified list of
RoutineArgument objects.
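
A minimal sketch of that replace-the-whole-list pattern, assuming a 3.x client where RoutineArgument takes a StandardSqlDataType; the routine ID and argument name are placeholders:

    from google.cloud import bigquery

    client = bigquery.Client()
    routine = client.get_routine("your-project.your_dataset.your_routine")  # placeholder

    # Mutating routine.arguments in place is not supported; build a new list,
    # assign it back, then send the changed field to the API.
    arguments = list(routine.arguments)
    arguments.append(
        bigquery.RoutineArgument(
            name="y",  # placeholder argument
            data_type=bigquery.StandardSqlDataType(
                type_kind=bigquery.StandardSqlTypeNames.INT64
            ),
        )
    )
    routine.arguments = arguments
    routine = client.update_routine(routine, ["arguments"])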


Converted google.cloud.bigquery.routine.RoutineReference into cross reference in:
google.cloud.bigquery.routine.RoutineReference: Reference
describing the ID of this routine.


Converted google.cloud.bigquery.routine.RemoteFunctionOptions into cross reference in:
Optional[google.cloud.bigquery.routine.RemoteFunctionOptions]:
Configures remote function options for a routine.

Converted google.cloud.bigquery.schema.PolicyTagList into cross reference in:
Optional[google.cloud.bigquery.schema.PolicyTagList]: Policy tag list
definition for this field.


Converted google.cloud.bigquery.enums.StandardSqlTypeNames into cross reference in:
typing.Optional[google.cloud.bigquery.enums.StandardSqlTypeNames]
Converted google.cloud.bigquery.standard_sql.StandardSqlDataType into cross reference in:
typing.Optional[google.cloud.bigquery.standard_sql.StandardSqlDataType]
Converted google.cloud.bigquery.standard_sql.StandardSqlField into cross reference in:
typing.Optional[typing.Iterable[google.cloud.bigquery.standard_sql.StandardSqlField]]
Converted google.cloud.bigquery.table.PartitionRange into cross reference in:
Optional[google.cloud.bigquery.table.PartitionRange]
Converted google.cloud.bigquery.table.PartitionRange into cross reference in:
google.cloud.bigquery.table.PartitionRange: Defines the
ranges for range partitioning.

Converted google.cloud.bigquery.table.Table into cross reference in:
Optional[Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, ]]
Converted google.cloud.bigquery.table.TableReference into cross reference in:
Optional[Union[ google.cloud.bigquery.table.Table, google.cloud.bigquery.table.TableReference, ]]
Converted google.cloud.bigquery.schema.SchemaField into cross reference in:
Optional[Sequence[google.cloud.bigquery.schema.SchemaField]]
Converted google.cloud.bigquery.schema.SchemaField into cross reference in:
List[google.cloud.bigquery.schema.SchemaField]: The subset of
columns to be read from the table.


Converted google.cloud.bigquery.table.TableReference into cross reference in:
Union[google.cloud.bigquery.table.TableReference, str]
Converted google.cloud.bigquery.encryption_configuration.EncryptionConfiguration into cross reference in:
google.cloud.bigquery.encryption_configuration.EncryptionConfiguration: Custom
encryption configuration for the table.

Custom encryption configuration (e.g., Cloud KMS keys) or :data:`None`
if using default encryption.

See `protecting data with Cloud KMS keys`_
in the BigQuery documentation.


Converted google.cloud.bigquery.table.RangePartitioning into cross reference in:
Optional[google.cloud.bigquery.table.RangePartitioning]:
Configures range-based partitioning for a table.


Only specify at most one of
time_partitioning or
range_partitioning.

Converted google.cloud.bigquery.table.TableReference into cross reference in:
google.cloud.bigquery.table.TableReference
Converted google.cloud.bigquery.table.TimePartitioning into cross reference in:
Optional[google.cloud.bigquery.table.TimePartitioning]: Configures time-based
partitioning for a table.

Only specify at most one of
time_partitioning or
range_partitioning.

Converted google.cloud.bigquery.table.TableReference into cross reference in:
google.cloud.bigquery.table.TableReference
Converted google.cloud.bigquery.table.TimePartitioning into cross reference in:
google.cloud.bigquery.table.TimePartitioning: Configures time-based
partitioning for a table.


Converted google.cloud.bigquery.table.TimePartitioningType into cross reference in:
Optional[google.cloud.bigquery.table.TimePartitioningType]
Converted google.cloud.bigquery.table.TimePartitioningType into cross reference in:
google.cloud.bigquery.table.TimePartitioningType: The type of time
partitioning to use.


Converted google.cloud.bigquery_v2.types.Model into cross reference in:
Sequence[google.cloud.bigquery_v2.types.Model]
Converted google.cloud.bigquery_v2.types.ModelReference into cross reference in:
google.cloud.bigquery_v2.types.ModelReference
Converted google.cloud.bigquery_v2.types.EncryptionConfiguration into cross reference in:
google.cloud.bigquery_v2.types.EncryptionConfiguration
Converted google.cloud.bigquery_v2.types.Model.ModelType into cross reference in:
google.cloud.bigquery_v2.types.Model.ModelType
Converted google.cloud.bigquery_v2.types.Model.TrainingRun into cross reference in:
Sequence[google.cloud.bigquery_v2.types.Model.TrainingRun]
Converted google.cloud.bigquery_v2.types.StandardSqlField into cross reference in:
Sequence[google.cloud.bigquery_v2.types.StandardSqlField]
Converted google.cloud.bigquery_v2.types.StandardSqlField into cross reference in:
Sequence[google.cloud.bigquery_v2.types.StandardSqlField]
Converted google.cloud.bigquery_v2.types.Model.ArimaOrder into cross reference in:
Sequence[google.cloud.bigquery_v2.types.Model.ArimaOrder]
Converted google.cloud.bigquery_v2.types.Model.ArimaFittingMetrics into cross reference in:
Sequence[google.cloud.bigquery_v2.types.Model.ArimaFittingMetrics]
Converted google.cloud.bigquery_v2.types.Model.SeasonalPeriod.SeasonalPeriodType into cross reference in:
Sequence[google.cloud.bigquery_v2.types.Model.SeasonalPeriod.SeasonalPeriodType]
Converted google.cloud.bigquery_v2.types.Model.ArimaForecastingMetrics.ArimaSingleModelForecastingMetrics into cross reference in:
Sequence[google.cloud.bigquery_v2.types.Model.ArimaForecastingMetrics.ArimaSingleModelForecastingMetrics]
Converted google.cloud.bigquery_v2.types.Model.ArimaOrder into cross reference in:
google.cloud.bigquery_v2.types.Model.ArimaOrder
Converted google.cloud.bigquery_v2.types.Model.ArimaFittingMetrics into cross reference in:
google.cloud.bigquery_v2.types.Model.ArimaFittingMetrics
Converted google.cloud.bigquery_v2.types.Model.SeasonalPeriod.SeasonalPeriodType into cross reference in:
Sequence[google.cloud.bigquery_v2.types.Model.SeasonalPeriod.SeasonalPeriodType]
Converted google.cloud.bigquery_v2.types.Model.AggregateClassificationMetrics into cross reference in:
google.cloud.bigquery_v2.types.Model.AggregateClassificationMetrics
Converted google.cloud.bigquery_v2.types.Model.BinaryClassificationMetrics.BinaryConfusionMatrix into cross reference in:
Sequence[google.cloud.bigquery_v2.types.Model.BinaryClassificationMetrics.BinaryConfusionMatrix]
Converted google.cloud.bigquery_v2.types.Model.ClusteringMetrics.Cluster into cross reference in:
Sequence[google.cloud.bigquery_v2.types.Model.ClusteringMetrics.Cluster]
Converted google.cloud.bigquery_v2.types.Model.ClusteringMetrics.Cluster.FeatureValue into cross reference in:
Sequence[google.cloud.bigquery_v2.types.Model.ClusteringMetrics.Cluster.FeatureValue]
Converted google.cloud.bigquery_v2.types.Model.ClusteringMetrics.Cluster.FeatureValue.CategoricalValue into cross reference in:
google.cloud.bigquery_v2.types.Model.ClusteringMetrics.Cluster.FeatureValue.CategoricalValue
Converted google.cloud.bigquery_v2.types.Model.ClusteringMetrics.Cluster.FeatureValue.CategoricalValue.CategoryCount into cross reference in:
Sequence[google.cloud.bigquery_v2.types.Model.ClusteringMetrics.Cluster.FeatureValue.CategoricalValue.CategoryCount]
Converted google.cloud.bigquery_v2.types.TableReference into cross reference in:
google.cloud.bigquery_v2.types.TableReference
Converted google.cloud.bigquery_v2.types.TableReference into cross reference in:
google.cloud.bigquery_v2.types.TableReference
Converted google.cloud.bigquery_v2.types.Model.RegressionMetrics into cross reference in:
google.cloud.bigquery_v2.types.Model.RegressionMetrics
Converted google.cloud.bigquery_v2.types.Model.BinaryClassificationMetrics into cross reference in:
google.cloud.bigquery_v2.types.Model.BinaryClassificationMetrics
Converted google.cloud.bigquery_v2.types.Model.MultiClassClassificationMetrics into cross reference in:
google.cloud.bigquery_v2.types.Model.MultiClassClassificationMetrics
Converted google.cloud.bigquery_v2.types.Model.ClusteringMetrics into cross reference in:
google.cloud.bigquery_v2.types.Model.ClusteringMetrics
Converted google.cloud.bigquery_v2.types.Model.RankingMetrics into cross reference in:
google.cloud.bigquery_v2.types.Model.RankingMetrics
Converted google.cloud.bigquery_v2.types.Model.ArimaForecastingMetrics into cross reference in:
google.cloud.bigquery_v2.types.Model.ArimaForecastingMetrics
Converted google.cloud.bigquery_v2.types.Model.GlobalExplanation.Explanation into cross reference in:
Sequence[google.cloud.bigquery_v2.types.Model.GlobalExplanation.Explanation]
Converted google.cloud.bigquery_v2.types.Model.AggregateClassificationMetrics into cross reference in:
google.cloud.bigquery_v2.types.Model.AggregateClassificationMetrics
Converted google.cloud.bigquery_v2.types.Model.MultiClassClassificationMetrics.ConfusionMatrix into cross reference in:
Sequence[google.cloud.bigquery_v2.types.Model.MultiClassClassificationMetrics.ConfusionMatrix]
Converted google.cloud.bigquery_v2.types.Model.MultiClassClassificationMetrics.ConfusionMatrix.Row into cross reference in:
Sequence[google.cloud.bigquery_v2.types.Model.MultiClassClassificationMetrics.ConfusionMatrix.Row]
Converted google.cloud.bigquery_v2.types.Model.MultiClassClassificationMetrics.ConfusionMatrix.Entry into cross reference in:
Sequence[google.cloud.bigquery_v2.types.Model.MultiClassClassificationMetrics.ConfusionMatrix.Entry]
Converted google.cloud.bigquery_v2.types.Model.TrainingRun.TrainingOptions into cross reference in:
google.cloud.bigquery_v2.types.Model.TrainingRun.TrainingOptions
Converted google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult into cross reference in:
Sequence[google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult]
Converted google.cloud.bigquery_v2.types.Model.EvaluationMetrics into cross reference in:
google.cloud.bigquery_v2.types.Model.EvaluationMetrics
Converted google.cloud.bigquery_v2.types.Model.DataSplitResult into cross reference in:
google.cloud.bigquery_v2.types.Model.DataSplitResult
Converted google.cloud.bigquery_v2.types.Model.GlobalExplanation into cross reference in:
Sequence[google.cloud.bigquery_v2.types.Model.GlobalExplanation]
Converted google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.ClusterInfo into cross reference in:
Sequence[google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.ClusterInfo]
Converted google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.ArimaResult.ArimaModelInfo into cross reference in:
Sequence[google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.ArimaResult.ArimaModelInfo]
Converted google.cloud.bigquery_v2.types.Model.SeasonalPeriod.SeasonalPeriodType into cross reference in:
Sequence[google.cloud.bigquery_v2.types.Model.SeasonalPeriod.SeasonalPeriodType]
Converted google.cloud.bigquery_v2.types.Model.ArimaOrder into cross reference in:
google.cloud.bigquery_v2.types.Model.ArimaOrder
Converted google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.ArimaResult.ArimaCoefficients into cross reference in:
google.cloud.bigquery_v2.types.Model.TrainingRun.IterationResult.ArimaResult.ArimaCoefficients
Converted google.cloud.bigquery_v2.types.Model.ArimaFittingMetrics into cross reference in:
google.cloud.bigquery_v2.types.Model.ArimaFittingMetrics
Converted google.cloud.bigquery_v2.types.Model.SeasonalPeriod.SeasonalPeriodType into cross reference in:
Sequence[google.cloud.bigquery_v2.types.Model.SeasonalPeriod.SeasonalPeriodType]
Converted google.cloud.bigquery_v2.types.Model.LossType into cross reference in:
google.cloud.bigquery_v2.types.Model.LossType
Converted google.cloud.bigquery_v2.types.Model.DataSplitMethod into cross reference in:
google.cloud.bigquery_v2.types.Model.DataSplitMethod
Converted google.cloud.bigquery_v2.types.Model.LearnRateStrategy into cross reference in:
google.cloud.bigquery_v2.types.Model.LearnRateStrategy
Converted google.cloud.bigquery_v2.types.Model.DistanceType into cross reference in:
google.cloud.bigquery_v2.types.Model.DistanceType
Converted google.cloud.bigquery_v2.types.Model.OptimizationStrategy into cross reference in:
google.cloud.bigquery_v2.types.Model.OptimizationStrategy
Converted google.cloud.bigquery_v2.types.Model.FeedbackType into cross reference in:
google.cloud.bigquery_v2.types.Model.FeedbackType
Converted google.cloud.bigquery_v2.types.Model.KmeansEnums.KmeansInitializationMethod into cross reference in:
google.cloud.bigquery_v2.types.Model.KmeansEnums.KmeansInitializationMethod
Converted google.cloud.bigquery_v2.types.Model.ArimaOrder into cross reference in:
google.cloud.bigquery_v2.types.Model.ArimaOrder
Converted google.cloud.bigquery_v2.types.Model.DataFrequency into cross reference in:
google.cloud.bigquery_v2.types.Model.DataFrequency
Converted google.cloud.bigquery_v2.types.Model.HolidayRegion into cross reference in:
google.cloud.bigquery_v2.types.Model.HolidayRegion
Converted google.cloud.bigquery_v2.types.Model into cross reference in:
google.cloud.bigquery_v2.types.Model
Converted google.cloud.bigquery_v2.types.StandardSqlDataType.TypeKind into cross reference in:
google.cloud.bigquery_v2.types.StandardSqlDataType.TypeKind
Converted google.cloud.bigquery_v2.types.StandardSqlStructType into cross reference in:
google.cloud.bigquery_v2.types.StandardSqlStructType
Converted google.cloud.bigquery_v2.types.StandardSqlDataType into cross reference in:
google.cloud.bigquery_v2.types.StandardSqlDataType
Converted google.cloud.bigquery_v2.types.StandardSqlField into cross reference in:
Sequence[google.cloud.bigquery_v2.types.StandardSqlField]
Session ran in 112 seconds (0:01:52)
nox > Session docfx was successful.
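
The long run of "Converted ... into cross reference in:" pairs above is the docfx session reporting each fully qualified bigquery_v2 type name it rewrote into a linkable cross reference in the generated API reference. A quick way to tally those conversions from a saved copy of this log (the filename here is hypothetical):

    # Count the reported cross-reference conversions
    grep -c 'into cross reference in:' kokoro_build.log

    # List the distinct type names that were converted
    grep -oE 'Converted [A-Za-z0-9_.]+' kokoro_build.log | sort -u
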
nox > Ran multiple sessions:
nox > * unit_noextras-3.7: failed
nox > * unit_noextras-3.12: failed
nox > * unit-3.7: failed
nox > * unit-3.8: failed
nox > * unit-3.12: failed
nox > * cover: failed
nox > * docs: success
nox > * docfx: success
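
The summary above shows every unit-test and coverage session failing while both documentation sessions succeed, which is enough to fail the invocation as a whole. A minimal local-reproduction sketch, assuming nox is installed and the commands are run from a python-bigquery checkout (the saved-log filename is hypothetical):

    # Pull the failed session names out of a saved copy of the log
    grep -E '^nox > \* .*: failed' kokoro_build.log

    # Re-run a single failed session by the name shown in the summary
    nox -s unit-3.12

    # Or re-run several failed sessions in one invocation
    nox -s unit_noextras-3.7 unit-3.8 cover
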
cleanup


[ID: 8343082] Command finished after 681 secs, exit value: 1


Warning: Permanently added 'localhost' (ED25519) to the list of known hosts.
[10:34:31 PDT] Collecting build artifacts from build VM
Build script failed with exit code: 1
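
The trailing "exit value: 1" and "Build script failed with exit code: 1" lines follow the usual shell convention: a nonzero exit status marks the run as failed. A generic sketch of how a CI wrapper can capture and surface that status (hypothetical script name, not the actual wrapper used in this job):

    # Run the build script and propagate its exit status to the CI layer
    ./build.sh
    status=$?
    if [ "$status" -ne 0 ]; then
      echo "Build script failed with exit code: $status"
    fi
    exit "$status"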