Editing file: asyncpg.cpython-38.pyc
.. dialect:: postgresql+asyncpg
    :name: asyncpg
    :dbapi: asyncpg
    :connectstring: postgresql+asyncpg://user:password@host:port/dbname[?key=value&key=value...]
    :url: https://magicstack.github.io/asyncpg/

The asyncpg dialect is SQLAlchemy's first Python asyncio dialect.

Using a special asyncio mediation layer, the asyncpg dialect is usable
as the backend for the :ref:`SQLAlchemy asyncio <asyncio_toplevel>`
extension package.

This dialect should normally be used only with the
:func:`_asyncio.create_async_engine` engine creation function::

    from sqlalchemy.ext.asyncio import create_async_engine

    engine = create_async_engine(
        "postgresql+asyncpg://user:pass@hostname/dbname"
    )

.. versionadded:: 1.4

.. note::

    By default asyncpg does not decode the ``json`` and ``jsonb`` types and
    returns them as strings. SQLAlchemy sets a default type decoder for the
    ``json`` and ``jsonb`` types using the Python builtin ``json.loads``
    function. The json implementation used can be changed by setting the
    attribute ``json_deserializer`` when creating the engine with
    :func:`create_engine` or :func:`create_async_engine`.
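As a minimal sketch of the ``json_deserializer`` hook described in the note above, any callable that accepts a JSON string may be supplied; the ``Decimal``-parsing variant below is an illustrative choice, not part of the dialect:

```python
import json
from decimal import Decimal


def decimal_json_deserializer(value: str):
    # Drop-in replacement for the default ``json.loads`` deserializer;
    # parsing floats as Decimal avoids binary-float rounding artifacts.
    return json.loads(value, parse_float=Decimal)


# The engine would then be created as (requires SQLAlchemy + asyncpg):
# engine = create_async_engine(url, json_deserializer=decimal_json_deserializer)
```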
.. _asyncpg_multihost:

Multihost Connections
---------------------

The asyncpg dialect features support for multiple fallback hosts in the
same way as that of the psycopg2 and psycopg dialects.  The syntax is the
same, using ``host=<host>:<port>`` combinations as additional query string
arguments; however, there is no default port, so all hosts must have a
complete port number present, otherwise an exception is raised::

    engine = create_async_engine(
        "postgresql+asyncpg://user:password@/dbname?host=HostA:5432&host=HostB:5432&host=HostC:5432"
    )

For complete background on this syntax, see :ref:`psycopg2_multi_host`.

.. versionadded:: 2.0.18

.. seealso::

    :ref:`psycopg2_multi_host`

.. _asyncpg_prepared_statement_cache:

Prepared Statement Cache
------------------------

The asyncpg SQLAlchemy dialect makes use of ``asyncpg.connection.prepare()``
for all statements.  The prepared statement objects are cached after
construction, which appears to grant a 10% or more performance improvement for
statement invocation.  The cache is on a per-DBAPI connection basis, which
means that the primary storage for prepared statements is within DBAPI
connections pooled within the connection pool.  The size of this cache
defaults to 100 statements per DBAPI connection and may be adjusted using the
``prepared_statement_cache_size`` DBAPI argument (note that while this
argument is implemented by SQLAlchemy, it is part of the DBAPI emulation
portion of the asyncpg dialect, therefore is handled as a DBAPI argument, not
a dialect argument)::

    engine = create_async_engine(
        "postgresql+asyncpg://user:pass@hostname/dbname?prepared_statement_cache_size=500"
    )

To disable the prepared statement cache, use a value of zero::

    engine = create_async_engine(
        "postgresql+asyncpg://user:pass@hostname/dbname?prepared_statement_cache_size=0"
    )

.. versionadded:: 1.4.0b2 Added ``prepared_statement_cache_size`` for asyncpg.
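The per-connection, size-bounded behavior described above can be sketched as a simple LRU cache. This is a hypothetical illustration of the caching idea, not the dialect's actual implementation; ``prepare`` below stands in for ``asyncpg.connection.prepare()``:

```python
from collections import OrderedDict


class PreparedStatementCache:
    """Sketch of a bounded, per-connection prepared-statement cache."""

    def __init__(self, maxsize: int = 100):  # 100 mirrors the documented default
        self.maxsize = maxsize
        self._cache: "OrderedDict[str, object]" = OrderedDict()

    def get(self, sql: str, prepare):
        if self.maxsize == 0:
            # A size of zero disables caching entirely.
            return prepare(sql)
        if sql in self._cache:
            # Cache hit: mark as most recently used, skip re-preparing.
            self._cache.move_to_end(sql)
            return self._cache[sql]
        stmt = prepare(sql)
        self._cache[sql] = stmt
        if len(self._cache) > self.maxsize:
            # Evict the least recently used statement.
            self._cache.popitem(last=False)
        return stmt
```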
.. warning::

    The ``asyncpg`` database driver necessarily uses caches for PostgreSQL
    type OIDs, which become stale when custom PostgreSQL datatypes such as
    ``ENUM`` objects are changed via DDL operations.  Additionally, prepared
    statements themselves which are optionally cached by SQLAlchemy's driver
    as described above may also become "stale" when DDL has been emitted to
    the PostgreSQL database which modifies the tables or other objects
    involved in a particular prepared statement.

    The SQLAlchemy asyncpg dialect will invalidate these caches within its
    local process when statements that represent DDL are emitted on a local
    connection, but this is only controllable within a single Python process /
    database engine.  If DDL changes are made from other database engines
    and/or processes, a running application may encounter the asyncpg
    exceptions ``InvalidCachedStatementError`` and/or
    ``InternalServerError("cache lookup failed for type <oid>")`` if it refers
    to pooled database connections which operated upon the previous
    structures.  The SQLAlchemy asyncpg dialect will recover from these error
    cases when the driver raises these exceptions by clearing its internal
    caches as well as those of the asyncpg driver in response to them, but
    cannot prevent them from being raised in the first place if the cached
    prepared statement or asyncpg type caches have gone stale, nor can it
    retry the statement as the PostgreSQL transaction is invalidated when
    these errors occur.

.. _asyncpg_prepared_statement_name:

Prepared Statement Name with PGBouncer
--------------------------------------

By default, asyncpg enumerates prepared statements in numeric order, which
can lead to errors if a name has already been taken for another prepared
statement. This issue can arise if your application uses database proxies
such as PgBouncer to handle connections. One possible workaround is to use
dynamic prepared statement names, which asyncpg now supports through an
optional ``name`` value for the statement name.
This allows you to generate your own unique names that won't conflict with
existing ones. To achieve this, you can provide a function that will be
called every time a prepared statement is prepared::

    from uuid import uuid4

    engine = create_async_engine(
        "postgresql+asyncpg://user:pass@somepgbouncer/dbname",
        poolclass=NullPool,
        connect_args={
            'prepared_statement_name_func': lambda: f'__asyncpg_{uuid4()}__',
        },
    )

.. seealso::

    https://github.com/MagicStack/asyncpg/issues/837

    https://github.com/sqlalchemy/sqlalchemy/issues/6467

.. warning::

    When using PGBouncer, to prevent a buildup of useless prepared statements
    in your application, it's important to use the :class:`.NullPool` pool
    class, and to configure PgBouncer to use `DISCARD
    <https://www.postgresql.org/docs/current/sql-discard.html>`_ when
    returning connections.  The DISCARD command is used to release resources
    held by the db connection, including prepared statements.  Without proper
    setup, prepared statements can accumulate quickly and cause performance
    issues.

Disabling the PostgreSQL JIT to improve ENUM datatype handling
--------------------------------------------------------------

Asyncpg has an `issue <https://github.com/MagicStack/asyncpg/issues/727>`_
when using PostgreSQL ENUM datatypes, where upon the creation of new database
connections, an expensive query may be emitted in order to retrieve metadata
regarding custom types, which has been shown to negatively affect
performance.  To mitigate this issue, the PostgreSQL "jit" setting may be
disabled from the client using this setting passed to
:func:`_asyncio.create_async_engine`::

    engine = create_async_engine(
        "postgresql+asyncpg://user:password@localhost/tmp",
        connect_args={"server_settings": {"jit": "off"}},
    )
.. seealso::

    https://github.com/MagicStack/asyncpg/issues/727
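The two ``connect_args`` workarounds above (disabling the server-side JIT, and generating unique prepared-statement names for PgBouncer setups) can be combined in one dictionary. The helper below is a hypothetical sketch, not part of the dialect; only the ``server_settings`` and ``prepared_statement_name_func`` keys come from the documentation above:

```python
from uuid import uuid4


def build_connect_args() -> dict:
    # Assemble asyncpg connect_args combining both documented workarounds.
    return {
        # Disable the PostgreSQL JIT for this session (ENUM metadata issue).
        "server_settings": {"jit": "off"},
        # Generate a fresh, unique prepared-statement name on every call.
        "prepared_statement_name_func": lambda: f"__asyncpg_{uuid4()}__",
    }


# Usage (requires SQLAlchemy + asyncpg; NullPool per the PGBouncer warning):
# engine = create_async_engine(url, poolclass=NullPool,
#                              connect_args=build_connect_args())
```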