Releases: databricks/databricks-sql-python
v4.0.1
What's Changed
- Support for multiple timestamp formats parsing (#533 by @jprakash-db)
- Rename _user_agent_entry in connect call to user_agent_entry to expose it as a public parameter (#530 by @shivam2680)
- Fix: compatibility with urllib3 versions less than 2.x (#526 by @shivam2680)
- Support for Python 3.13 and updated dependencies (#510 by @dhirschfeld and @dbaxa)
Full Changelog: v4.0.0...v4.0.1
v3.7.3
What's Changed
- Fix: Unable to poll small results in execute_async function (#515 by @jprakash-db)
- Updated log messages to show the status code and error messages of requests (#511 by @jprakash-db)
- Fix: Incorrect metadata was fetched in case of queries with the same alias (#505 by @jprakash-db)
Full Changelog: v3.6.0...v3.7.3
v3.7.2
What's Changed
- Updated the retry_delay_max and retry_timeout (#497 by @jprakash-db)
Full Changelog: v3.6.0...v3.7.2
v4.0.0
What's Changed
- Split the connector into two separate packages:
databricks-sql-connector and databricks-sqlalchemy. The databricks-sql-connector package contains the core functionality of the connector, while the databricks-sqlalchemy package contains the SQLAlchemy dialect for the connector. (#444 by @jprakash-db)
- The pyarrow dependency is now optional in databricks-sql-connector. Users who need Arrow must explicitly install pyarrow.
Full Changelog: v3.6.0...v4.0.0
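The package split above can be sketched as a minimal install recipe. This assumes the package names stated in the notes (databricks-sql-connector, databricks-sqlalchemy, pyarrow); check PyPI for the exact distributions and extras your version supports.

```shell
# Core connector only; Arrow-based result handling is no longer pulled in:
pip install databricks-sql-connector

# Install pyarrow explicitly if you rely on Arrow results:
pip install pyarrow

# The SQLAlchemy dialect now lives in its own package:
pip install databricks-sqlalchemy
```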
v3.7.1
What's Changed
- Relaxed the number of HTTP retry attempts (#486 by @jprakash-db)
Full Changelog: v3.6.0...v3.7.1
v3.7.0
What's Changed
- Fix: Incorrect number of rows fetched in inline results when fetching results with FETCH_NEXT orientation (#479 by @jprakash-db)
- Updated the doc to specify native parameters are not supported in PUT operation (#477 by @jprakash-db)
- Relax pyarrow and numpy pins (#452 by @arredond)
- Feature: Support for async execute has been added (#463 by @jprakash-db)
- Updated the HTTP retry logic to be similar to the other Databricks drivers (#467 by @jprakash-db)
Full Changelog: v3.6.0...v3.7.0
v3.6.0
What's Changed
- Support encryption headers in the cloud fetch request by @jackyhu-db in #460
Full Changelog: v3.5.0...v3.6.0
v3.5.0
What's Changed
- Create a non-pyarrow flow to handle small results for the column set by @jprakash-db in #440
- Fix: On non-retryable error, ensure PySQL includes useful information in error by @shivam2680 in #447
New Contributors
- @shivam2680 made their first contribution in #447
Full Changelog: v3.4.0...v3.5.0
v3.4.0
- Unpin pandas to support v2.2.2 (#416 by @kfollesdal)
- Make OAuth as the default authenticator if no authentication setting is provided (#419 by @jackyhu-db)
- Fix (regression): use SSL options with HTTPS connection pool (#425 by @kravets-levko)
Full Changelog: v3.3.0...v3.4.0
v3.3.0
- Don't retry requests that fail with HTTP code 401 (#408 by @Hodnebo)
- Remove username/password (aka "basic") auth option (#409 by @jackyhu-db)
- Refactor CloudFetch handler to fix numerous issues with it (#405 by @kravets-levko)
- Add option to disable SSL verification for CloudFetch links (#414 by @kravets-levko)
Full Changelog: v3.2.0...v3.3.0
Databricks-managed passwords reached end of life on July 10, 2024, so basic auth support was removed from
the library. See https://docs.databricks.com/en/security/auth-authz/password-deprecation.html
The existing option _tls_no_verify=True of sql.connect(...) now also disables SSL certificate verification
(but not SSL itself) for CloudFetch links. Use this option only as a workaround when other ways of fixing
SSL certificate errors have not worked.
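The workaround above might be wired up as follows. This is a hedged sketch, not the library's documented usage: the hostname, HTTP path, and token are placeholders, and the actual sql.connect call is left commented out because it requires a live warehouse.

```python
# Placeholder connection settings. _tls_no_verify=True disables SSL
# certificate verification (including for CloudFetch links, per the note
# above) and should only be used as a temporary workaround.
connect_kwargs = {
    "server_hostname": "example.cloud.databricks.com",  # placeholder host
    "http_path": "/sql/1.0/warehouses/abc123",          # placeholder path
    "access_token": "dapi-REDACTED",                    # placeholder token
    "_tls_no_verify": True,
}

# With databricks-sql-connector installed and real credentials:
# from databricks import sql
# with sql.connect(**connect_kwargs) as connection:
#     with connection.cursor() as cursor:
#         cursor.execute("SELECT 1")
#         print(cursor.fetchall())
```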