  1. # mssql/base.py
  2. # Copyright (C) 2005-2021 the SQLAlchemy authors and contributors
  3. # <see AUTHORS file>
  4. #
  5. # This module is part of SQLAlchemy and is released under
  6. # the MIT License: http://www.opensource.org/licenses/mit-license.php
  7. """
  8. .. dialect:: mssql
  9. :name: Microsoft SQL Server
  10. :full_support: 2017
  11. :normal_support: 2012+
  12. :best_effort: 2005+
  13. .. _mssql_external_dialects:
  14. External Dialects
  15. -----------------
  16. In addition to the above DBAPI layers with native SQLAlchemy support, there
  17. are third-party dialects for other DBAPI layers that are compatible
  18. with SQL Server. See the "External Dialects" list on the
  19. :ref:`dialect_toplevel` page.
  20. .. _mssql_identity:
  21. Auto Increment Behavior / IDENTITY Columns
  22. ------------------------------------------
  23. SQL Server provides so-called "auto incrementing" behavior using the
  24. ``IDENTITY`` construct, which can be placed on any single integer column in a
  25. table. SQLAlchemy considers ``IDENTITY`` within its default "autoincrement"
  26. behavior for an integer primary key column, described at
  27. :paramref:`_schema.Column.autoincrement`. This means that by default,
  28. the first
  29. integer primary key column in a :class:`_schema.Table`
  30. will be considered to be the
  31. identity column - unless it is associated with a :class:`.Sequence` - and will
  32. generate DDL as such::
  33. from sqlalchemy import Table, MetaData, Column, Integer
  34. m = MetaData()
  35. t = Table('t', m,
  36. Column('id', Integer, primary_key=True),
  37. Column('x', Integer))
  38. m.create_all(engine)
  39. The above example will generate DDL as:
  40. .. sourcecode:: sql
  41. CREATE TABLE t (
  42. id INTEGER NOT NULL IDENTITY,
  43. x INTEGER NULL,
  44. PRIMARY KEY (id)
  45. )
  46. For the case where this default generation of ``IDENTITY`` is not desired,
  47. specify ``False`` for the :paramref:`_schema.Column.autoincrement` flag,
  48. on the first integer primary key column::
  49. m = MetaData()
  50. t = Table('t', m,
  51. Column('id', Integer, primary_key=True, autoincrement=False),
  52. Column('x', Integer))
  53. m.create_all(engine)
  54. To add the ``IDENTITY`` keyword to a non-primary key column, specify
  55. ``True`` for the :paramref:`_schema.Column.autoincrement` flag on the desired
  56. :class:`_schema.Column` object, and ensure that
  57. :paramref:`_schema.Column.autoincrement`
  58. is set to ``False`` on any integer primary key column::
  59. m = MetaData()
  60. t = Table('t', m,
  61. Column('id', Integer, primary_key=True, autoincrement=False),
  62. Column('x', Integer, autoincrement=True))
  63. m.create_all(engine)
  64. .. versionchanged:: 1.4 Added :class:`_schema.Identity` construct
  65. in a :class:`_schema.Column` to specify the start and increment
  66. parameters of an IDENTITY. These replace
  67. the use of the :class:`.Sequence` object in order to specify these values.
  68. .. deprecated:: 1.4
  69. The ``mssql_identity_start`` and ``mssql_identity_increment`` parameters
70. to :class:`_schema.Column` are deprecated and should be replaced by
  71. an :class:`_schema.Identity` object. Specifying both ways of configuring
  72. an IDENTITY will result in a compile error.
  73. These options are also no longer returned as part of the
  74. ``dialect_options`` key in :meth:`_reflection.Inspector.get_columns`.
  75. Use the information in the ``identity`` key instead.
  76. .. deprecated:: 1.3
  77. The use of :class:`.Sequence` to specify IDENTITY characteristics is
  78. deprecated and will be removed in a future release. Please use
  79. the :class:`_schema.Identity` object parameters
  80. :paramref:`_schema.Identity.start` and
  81. :paramref:`_schema.Identity.increment`.
  82. .. versionchanged:: 1.4 Removed the ability to use a :class:`.Sequence`
  83. object to modify IDENTITY characteristics. :class:`.Sequence` objects
  84. now only manipulate true T-SQL SEQUENCE types.
  85. .. note::
  86. There can only be one IDENTITY column on the table. When using
  87. ``autoincrement=True`` to enable the IDENTITY keyword, SQLAlchemy does not
  88. guard against multiple columns specifying the option simultaneously. The
  89. SQL Server database will instead reject the ``CREATE TABLE`` statement.
  90. .. note::
  91. An INSERT statement which attempts to provide a value for a column that is
  92. marked with IDENTITY will be rejected by SQL Server. In order for the
  93. value to be accepted, a session-level option "SET IDENTITY_INSERT" must be
  94. enabled. The SQLAlchemy SQL Server dialect will perform this operation
  95. automatically when using a core :class:`_expression.Insert`
  96. construct; if the
  97. execution specifies a value for the IDENTITY column, the "IDENTITY_INSERT"
98. option will be enabled for the span of that statement's invocation. However,
  99. this scenario is not high performing and should not be relied upon for
  100. normal use. If a table doesn't actually require IDENTITY behavior in its
  101. integer primary key column, the keyword should be disabled when creating
  102. the table by ensuring that ``autoincrement=False`` is set.
  103. Controlling "Start" and "Increment"
  104. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  105. Specific control over the "start" and "increment" values for
  106. the ``IDENTITY`` generator are provided using the
  107. :paramref:`_schema.Identity.start` and :paramref:`_schema.Identity.increment`
  108. parameters passed to the :class:`_schema.Identity` object::
  109. from sqlalchemy import Table, Integer, Column, Identity
  110. test = Table(
  111. 'test', metadata,
  112. Column(
  113. 'id',
  114. Integer,
115. Identity(start=100, increment=10),
116. primary_key=True
  117. ),
  118. Column('name', String(20))
  119. )
  120. The CREATE TABLE for the above :class:`_schema.Table` object would be:
  121. .. sourcecode:: sql
  122. CREATE TABLE test (
  123. id INTEGER NOT NULL IDENTITY(100,10) PRIMARY KEY,
124. name VARCHAR(20) NULL
  125. )
  126. .. note::
127. The :class:`_schema.Identity` object supports many other parameters in
128. addition to ``start`` and ``increment``. These are not supported by
129. SQL Server and will be ignored when generating the CREATE TABLE DDL.
130. .. versionchanged:: 1.4 The :class:`_schema.Identity` object is
  131. now used to affect the
  132. ``IDENTITY`` generator for a :class:`_schema.Column` under SQL Server.
  133. Previously, the :class:`.Sequence` object was used. As SQL Server now
  134. supports real sequences as a separate construct, :class:`.Sequence` will be
  135. functional in the normal way starting from SQLAlchemy version 1.4.
  136. INSERT behavior
  137. ^^^^^^^^^^^^^^^^
  138. Handling of the ``IDENTITY`` column at INSERT time involves two key
  139. techniques. The most common is being able to fetch the "last inserted value"
  140. for a given ``IDENTITY`` column, a process which SQLAlchemy performs
  141. implicitly in many cases, most importantly within the ORM.
  142. The process for fetching this value has several variants:
  143. * In the vast majority of cases, RETURNING is used in conjunction with INSERT
  144. statements on SQL Server in order to get newly generated primary key values:
  145. .. sourcecode:: sql
  146. INSERT INTO t (x) OUTPUT inserted.id VALUES (?)
  147. * When RETURNING is not available or has been disabled via
  148. ``implicit_returning=False``, either the ``scope_identity()`` function or
  149. the ``@@identity`` variable is used; behavior varies by backend:
  150. * when using PyODBC, the phrase ``; select scope_identity()`` will be
  151. appended to the end of the INSERT statement; a second result set will be
  152. fetched in order to receive the value. Given a table as::
  153. t = Table('t', m, Column('id', Integer, primary_key=True),
  154. Column('x', Integer),
  155. implicit_returning=False)
  156. an INSERT will look like:
  157. .. sourcecode:: sql
  158. INSERT INTO t (x) VALUES (?); select scope_identity()
  159. * Other dialects such as pymssql will call upon
  160. ``SELECT scope_identity() AS lastrowid`` subsequent to an INSERT
  161. statement. If the flag ``use_scope_identity=False`` is passed to
  162. :func:`_sa.create_engine`,
  163. the statement ``SELECT @@identity AS lastrowid``
  164. is used instead.
  165. A table that contains an ``IDENTITY`` column will prohibit an INSERT statement
  166. that refers to the identity column explicitly. The SQLAlchemy dialect will
  167. detect when an INSERT construct, created using a core
  168. :func:`_expression.insert`
  169. construct (not a plain string SQL), refers to the identity column, and
  170. in this case will emit ``SET IDENTITY_INSERT ON`` prior to the insert
  171. statement proceeding, and ``SET IDENTITY_INSERT OFF`` subsequent to the
  172. execution. Given this example::
  173. m = MetaData()
  174. t = Table('t', m, Column('id', Integer, primary_key=True),
  175. Column('x', Integer))
  176. m.create_all(engine)
  177. with engine.begin() as conn:
  178. conn.execute(t.insert(), {'id': 1, 'x':1}, {'id':2, 'x':2})
  179. The above column will be created with IDENTITY, however the INSERT statement
  180. we emit is specifying explicit values. In the echo output we can see
  181. how SQLAlchemy handles this:
  182. .. sourcecode:: sql
  183. CREATE TABLE t (
  184. id INTEGER NOT NULL IDENTITY(1,1),
  185. x INTEGER NULL,
  186. PRIMARY KEY (id)
  187. )
  188. COMMIT
  189. SET IDENTITY_INSERT t ON
  190. INSERT INTO t (id, x) VALUES (?, ?)
  191. ((1, 1), (2, 2))
  192. SET IDENTITY_INSERT t OFF
  193. COMMIT
  194. This
  195. is an auxiliary use case suitable for testing and bulk insert scenarios.
  196. SEQUENCE support
  197. ----------------
  198. The :class:`.Sequence` object now creates "real" sequences, i.e.,
  199. ``CREATE SEQUENCE``. To provide compatibility with other dialects,
  200. :class:`.Sequence` defaults to a start value of 1, even though the
201. T-SQL default is -9223372036854775808.
  202. .. versionadded:: 1.4.0
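As a minimal sketch, assuming a ``metadata`` collection is present, a
sequence-backed column may be set up as::

    from sqlalchemy import Column, Integer, Sequence, Table

    t = Table(
        't', metadata,
        Column('id', Integer, Sequence('t_id_seq', start=1), primary_key=True),
        Column('x', Integer)
    )

At INSERT time the dialect fetches new values with
``SELECT NEXT VALUE FOR t_id_seq``; note that a column associated with a
:class:`.Sequence` is not considered to be the IDENTITY column.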
  203. MAX on VARCHAR / NVARCHAR
  204. -------------------------
  205. SQL Server supports the special string "MAX" within the
  206. :class:`_types.VARCHAR` and :class:`_types.NVARCHAR` datatypes,
  207. to indicate "maximum length possible". The dialect currently handles this as
  208. a length of "None" in the base type, rather than supplying a
  209. dialect-specific version of these types, so that a base type
  210. specified such as ``VARCHAR(None)`` can assume "unlengthed" behavior on
  211. more than one backend without using dialect-specific types.
  212. To build a SQL Server VARCHAR or NVARCHAR with MAX length, use None::
  213. my_table = Table(
  214. 'my_table', metadata,
  215. Column('my_data', VARCHAR(None)),
  216. Column('my_n_data', NVARCHAR(None))
  217. )
  218. Collation Support
  219. -----------------
  220. Character collations are supported by the base string types,
  221. specified by the string argument "collation"::
  222. from sqlalchemy import VARCHAR
  223. Column('login', VARCHAR(32, collation='Latin1_General_CI_AS'))
  224. When such a column is associated with a :class:`_schema.Table`, the
  225. CREATE TABLE statement for this column will yield::
  226. login VARCHAR(32) COLLATE Latin1_General_CI_AS NULL
  227. LIMIT/OFFSET Support
  228. --------------------
  229. MSSQL has added support for LIMIT / OFFSET as of SQL Server 2012, via the
  230. "OFFSET n ROWS" and "FETCH NEXT n ROWS" clauses. SQLAlchemy supports these
  231. syntaxes automatically if SQL Server 2012 or greater is detected.
  232. .. versionchanged:: 1.4 support added for SQL Server "OFFSET n ROWS" and
  233. "FETCH NEXT n ROWS" syntax.
  234. For statements that specify only LIMIT and no OFFSET, all versions of SQL
  235. Server support the TOP keyword. This syntax is used for all SQL Server
  236. versions when no OFFSET clause is present. A statement such as::
  237. select(some_table).limit(5)
  238. will render similarly to::
  239. SELECT TOP 5 col1, col2.. FROM table
  240. For versions of SQL Server prior to SQL Server 2012, a statement that uses
  241. LIMIT and OFFSET, or just OFFSET alone, will be rendered using the
  242. ``ROW_NUMBER()`` window function. A statement such as::
  243. select(some_table).order_by(some_table.c.col3).limit(5).offset(10)
  244. will render similarly to::
  245. SELECT anon_1.col1, anon_1.col2 FROM (SELECT col1, col2,
  246. ROW_NUMBER() OVER (ORDER BY col3) AS
247. mssql_rn FROM table) AS
  248. anon_1 WHERE mssql_rn > :param_1 AND mssql_rn <= :param_2 + :param_1
  249. Note that when using LIMIT and/or OFFSET, whether using the older
  250. or newer SQL Server syntaxes, the statement must have an ORDER BY as well,
  251. else a :class:`.CompileError` is raised.
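For SQL Server 2012 and greater, the same LIMIT/OFFSET statement renders
using the newer syntax, similarly to:

.. sourcecode:: sql

    SELECT col1, col2 FROM table ORDER BY col3
    OFFSET :param_1 ROWS FETCH FIRST :param_2 ROWS ONLY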
  252. .. _mssql_isolation_level:
  253. Transaction Isolation Level
  254. ---------------------------
  255. All SQL Server dialects support setting of transaction isolation level
  256. both via a dialect-specific parameter
  257. :paramref:`_sa.create_engine.isolation_level`
  258. accepted by :func:`_sa.create_engine`,
  259. as well as the :paramref:`.Connection.execution_options.isolation_level`
  260. argument as passed to
  261. :meth:`_engine.Connection.execution_options`.
  262. This feature works by issuing the
  263. command ``SET TRANSACTION ISOLATION LEVEL <level>`` for
  264. each new connection.
  265. To set isolation level using :func:`_sa.create_engine`::
  266. engine = create_engine(
  267. "mssql+pyodbc://scott:tiger@ms_2008",
  268. isolation_level="REPEATABLE READ"
  269. )
  270. To set using per-connection execution options::
  271. connection = engine.connect()
  272. connection = connection.execution_options(
  273. isolation_level="READ COMMITTED"
  274. )
  275. Valid values for ``isolation_level`` include:
  276. * ``AUTOCOMMIT`` - pyodbc / pymssql-specific
  277. * ``READ COMMITTED``
  278. * ``READ UNCOMMITTED``
  279. * ``REPEATABLE READ``
  280. * ``SERIALIZABLE``
  281. * ``SNAPSHOT`` - specific to SQL Server
  282. .. versionadded:: 1.2 added AUTOCOMMIT isolation level setting
  283. .. seealso::
  284. :ref:`dbapi_autocommit`
  285. Nullability
  286. -----------
  287. MSSQL has support for three levels of column nullability. The default
  288. nullability allows nulls and is explicit in the CREATE TABLE
  289. construct::
  290. name VARCHAR(20) NULL
  291. If ``nullable=None`` is specified then no specification is made. In
  292. other words the database's configured default is used. This will
  293. render::
  294. name VARCHAR(20)
  295. If ``nullable`` is ``True`` or ``False`` then the column will be
  296. ``NULL`` or ``NOT NULL`` respectively.
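A minimal sketch showing all three settings side by side::

    from sqlalchemy import Column, MetaData, String, Table

    m = MetaData()
    t = Table(
        't', m,
        Column('a', String(20), nullable=True),   # renders NULL
        Column('b', String(20), nullable=False),  # renders NOT NULL
        Column('c', String(20), nullable=None)    # renders no specification
    )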
  297. Date / Time Handling
  298. --------------------
  299. DATE and TIME are supported. Bind parameters are converted
  300. to datetime.datetime() objects as required by most MSSQL drivers,
  301. and results are processed from strings if needed.
  302. The DATE and TIME types are not available for MSSQL 2005 and
  303. previous - if a server version below 2008 is detected, DDL
  304. for these types will be issued as DATETIME.
  305. .. _mssql_large_type_deprecation:
  306. Large Text/Binary Type Deprecation
  307. ----------------------------------
  308. Per
  309. `SQL Server 2012/2014 Documentation <http://technet.microsoft.com/en-us/library/ms187993.aspx>`_,
  310. the ``NTEXT``, ``TEXT`` and ``IMAGE`` datatypes are to be removed from SQL
  311. Server in a future release. SQLAlchemy normally relates these types to the
  312. :class:`.UnicodeText`, :class:`_expression.TextClause` and
  313. :class:`.LargeBinary` datatypes.
  314. In order to accommodate this change, a new flag ``deprecate_large_types``
  315. is added to the dialect, which will be automatically set based on detection
  316. of the server version in use, if not otherwise set by the user. The
  317. behavior of this flag is as follows:
  318. * When this flag is ``True``, the :class:`.UnicodeText`,
  319. :class:`_expression.TextClause` and
  320. :class:`.LargeBinary` datatypes, when used to render DDL, will render the
  321. types ``NVARCHAR(max)``, ``VARCHAR(max)``, and ``VARBINARY(max)``,
  322. respectively. This is a new behavior as of the addition of this flag.
  323. * When this flag is ``False``, the :class:`.UnicodeText`,
  324. :class:`_expression.TextClause` and
  325. :class:`.LargeBinary` datatypes, when used to render DDL, will render the
  326. types ``NTEXT``, ``TEXT``, and ``IMAGE``,
  327. respectively. This is the long-standing behavior of these types.
  328. * The flag begins with the value ``None``, before a database connection is
  329. established. If the dialect is used to render DDL without the flag being
  330. set, it is interpreted the same as ``False``.
  331. * On first connection, the dialect detects if SQL Server version 2012 or
  332. greater is in use; if the flag is still at ``None``, it sets it to ``True``
  333. or ``False`` based on whether 2012 or greater is detected.
  334. * The flag can be set to either ``True`` or ``False`` when the dialect
  335. is created, typically via :func:`_sa.create_engine`::
  336. eng = create_engine("mssql+pymssql://user:pass@host/db",
  337. deprecate_large_types=True)
  338. * Complete control over whether the "old" or "new" types are rendered is
  339. available in all SQLAlchemy versions by using the UPPERCASE type objects
  340. instead: :class:`_types.NVARCHAR`, :class:`_types.VARCHAR`,
  341. :class:`_types.VARBINARY`, :class:`_types.TEXT`, :class:`_mssql.NTEXT`,
  342. :class:`_mssql.IMAGE`
  343. will always remain fixed and always output exactly that
  344. type.
  345. .. versionadded:: 1.0.0
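As an illustrative sketch, the two renderings can be compared by compiling
DDL against dialects constructed directly with the flag::

    from sqlalchemy import Column, MetaData, Table, UnicodeText
    from sqlalchemy.dialects import mssql
    from sqlalchemy.schema import CreateTable

    t = Table('t', MetaData(), Column('data', UnicodeText))

    # renders: data NVARCHAR(max) NULL
    print(CreateTable(t).compile(dialect=mssql.dialect(deprecate_large_types=True)))

    # renders: data NTEXT NULL
    print(CreateTable(t).compile(dialect=mssql.dialect(deprecate_large_types=False)))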
  346. .. _multipart_schema_names:
  347. Multipart Schema Names
  348. ----------------------
  349. SQL Server schemas sometimes require multiple parts to their "schema"
  350. qualifier, that is, including the database name and owner name as separate
  351. tokens, such as ``mydatabase.dbo.some_table``. These multipart names can be set
  352. at once using the :paramref:`_schema.Table.schema` argument of
  353. :class:`_schema.Table`::
  354. Table(
  355. "some_table", metadata,
  356. Column("q", String(50)),
  357. schema="mydatabase.dbo"
  358. )
  359. When performing operations such as table or component reflection, a schema
  360. argument that contains a dot will be split into separate
  361. "database" and "owner" components in order to correctly query the SQL
  362. Server information schema tables, as these two values are stored separately.
  363. Additionally, when rendering the schema name for DDL or SQL, the two
  364. components will be quoted separately for case sensitive names and other
  365. special characters. Given an argument as below::
  366. Table(
  367. "some_table", metadata,
  368. Column("q", String(50)),
  369. schema="MyDataBase.dbo"
  370. )
  371. The above schema would be rendered as ``[MyDataBase].dbo``, and also in
  372. reflection, would be reflected using "dbo" as the owner and "MyDataBase"
  373. as the database name.
  374. To control how the schema name is broken into database / owner,
  375. specify brackets (which in SQL Server are quoting characters) in the name.
  376. Below, the "owner" will be considered as ``MyDataBase.dbo`` and the
  377. "database" will be None::
  378. Table(
  379. "some_table", metadata,
  380. Column("q", String(50)),
  381. schema="[MyDataBase.dbo]"
  382. )
  383. To individually specify both database and owner name with special characters
  384. or embedded dots, use two sets of brackets::
  385. Table(
  386. "some_table", metadata,
  387. Column("q", String(50)),
  388. schema="[MyDataBase.Period].[MyOwner.Dot]"
  389. )
  390. .. versionchanged:: 1.2 the SQL Server dialect now treats brackets as
391. identifier delimiters, splitting the schema into separate database
  392. and owner tokens, to allow dots within either name itself.
  393. .. _legacy_schema_rendering:
  394. Legacy Schema Mode
  395. ------------------
  396. Very old versions of the MSSQL dialect introduced the behavior such that a
  397. schema-qualified table would be auto-aliased when used in a
  398. SELECT statement; given a table::
  399. account_table = Table(
  400. 'account', metadata,
  401. Column('id', Integer, primary_key=True),
  402. Column('info', String(100)),
  403. schema="customer_schema"
  404. )
  405. this legacy mode of rendering would assume that "customer_schema.account"
  406. would not be accepted by all parts of the SQL statement, as illustrated
  407. below::
  408. >>> eng = create_engine("mssql+pymssql://mydsn", legacy_schema_aliasing=True)
  409. >>> print(account_table.select().compile(eng))
  410. SELECT account_1.id, account_1.info
  411. FROM customer_schema.account AS account_1
  412. This mode of behavior is now off by default, as it appears to have served
  413. no purpose; however in the case that legacy applications rely upon it,
  414. it is available using the ``legacy_schema_aliasing`` argument to
  415. :func:`_sa.create_engine` as illustrated above.
  416. .. versionchanged:: 1.1 the ``legacy_schema_aliasing`` flag introduced
  417. in version 1.0.5 to allow disabling of legacy mode for schemas now
  418. defaults to False.
  419. .. deprecated:: 1.4
  420. The ``legacy_schema_aliasing`` flag is now
  421. deprecated and will be removed in a future release.
  422. .. _mssql_indexes:
  423. Clustered Index Support
  424. -----------------------
  425. The MSSQL dialect supports clustered indexes (and primary keys) via the
  426. ``mssql_clustered`` option. This option is available to :class:`.Index`,
427. :class:`.UniqueConstraint`, and :class:`.PrimaryKeyConstraint`.
  428. To generate a clustered index::
  429. Index("my_index", table.c.x, mssql_clustered=True)
  430. which renders the index as ``CREATE CLUSTERED INDEX my_index ON table (x)``.
  431. To generate a clustered primary key use::
  432. Table('my_table', metadata,
  433. Column('x', ...),
  434. Column('y', ...),
  435. PrimaryKeyConstraint("x", "y", mssql_clustered=True))
  436. which will render the table, for example, as::
  437. CREATE TABLE my_table (x INTEGER NOT NULL, y INTEGER NOT NULL,
  438. PRIMARY KEY CLUSTERED (x, y))
  439. Similarly, we can generate a clustered unique constraint using::
  440. Table('my_table', metadata,
  441. Column('x', ...),
  442. Column('y', ...),
  443. PrimaryKeyConstraint("x"),
  444. UniqueConstraint("y", mssql_clustered=True),
  445. )
  446. To explicitly request a non-clustered primary key (for example, when
  447. a separate clustered index is desired), use::
  448. Table('my_table', metadata,
  449. Column('x', ...),
  450. Column('y', ...),
  451. PrimaryKeyConstraint("x", "y", mssql_clustered=False))
  452. which will render the table, for example, as::
  453. CREATE TABLE my_table (x INTEGER NOT NULL, y INTEGER NOT NULL,
  454. PRIMARY KEY NONCLUSTERED (x, y))
  455. .. versionchanged:: 1.1 the ``mssql_clustered`` option now defaults
  456. to None, rather than False. ``mssql_clustered=False`` now explicitly
  457. renders the NONCLUSTERED clause, whereas None omits the CLUSTERED
  458. clause entirely, allowing SQL Server defaults to take effect.
  459. MSSQL-Specific Index Options
  460. -----------------------------
  461. In addition to clustering, the MSSQL dialect supports other special options
  462. for :class:`.Index`.
  463. INCLUDE
  464. ^^^^^^^
  465. The ``mssql_include`` option renders INCLUDE(colname) for the given string
  466. names::
  467. Index("my_index", table.c.x, mssql_include=['y'])
  468. would render the index as ``CREATE INDEX my_index ON table (x) INCLUDE (y)``
  469. .. _mssql_index_where:
  470. Filtered Indexes
  471. ^^^^^^^^^^^^^^^^
472. The ``mssql_where`` option renders WHERE(condition) for the given SQL
473. expression::
  474. Index("my_index", table.c.x, mssql_where=table.c.x > 10)
  475. would render the index as ``CREATE INDEX my_index ON table (x) WHERE x > 10``.
  476. .. versionadded:: 1.3.4
  477. Index ordering
  478. ^^^^^^^^^^^^^^
  479. Index ordering is available via functional expressions, such as::
  480. Index("my_index", table.c.x.desc())
  481. would render the index as ``CREATE INDEX my_index ON table (x DESC)``
  482. .. seealso::
  483. :ref:`schema_indexes_functional`
  484. Compatibility Levels
  485. --------------------
  486. MSSQL supports the notion of setting compatibility levels at the
487. database level. This allows, for instance, a database that is
488. compatible with SQL2000 to run on a SQL2005 database
489. server. ``server_version_info`` will always return the database
  490. server version information (in this case SQL2005) and not the
  491. compatibility level information. Because of this, if running under
492. a backwards compatibility mode, SQLAlchemy may attempt to use T-SQL
493. statements that the database server cannot parse.
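As a sketch, the reported version can be inspected once a connection has
been made::

    engine = create_engine("mssql+pyodbc://scott:tiger@mydsn")
    with engine.connect() as conn:
        print(conn.dialect.server_version_info)  # e.g. (9, 0, ...) for SQL2005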
  494. Triggers
  495. --------
  496. SQLAlchemy by default uses OUTPUT INSERTED to get at newly
  497. generated primary key values via IDENTITY columns or other
  498. server side defaults. MS-SQL does not
  499. allow the usage of OUTPUT INSERTED on tables that have triggers.
  500. To disable the usage of OUTPUT INSERTED on a per-table basis,
  501. specify ``implicit_returning=False`` for each :class:`_schema.Table`
  502. which has triggers::
  503. Table('mytable', metadata,
  504. Column('id', Integer, primary_key=True),
  505. # ...,
  506. implicit_returning=False
  507. )
  508. Declarative form::
  509. class MyClass(Base):
  510. # ...
  511. __table_args__ = {'implicit_returning':False}
  512. This option can also be specified engine-wide using the
  513. ``implicit_returning=False`` argument on :func:`_sa.create_engine`.
  514. .. _mssql_rowcount_versioning:
  515. Rowcount Support / ORM Versioning
  516. ---------------------------------
  517. The SQL Server drivers may have limited ability to return the number
  518. of rows updated from an UPDATE or DELETE statement.
  519. As of this writing, the PyODBC driver is not able to return a rowcount when
  520. OUTPUT INSERTED is used. This impacts the SQLAlchemy ORM's versioning feature
  521. in many cases where server-side value generators are in use in that while the
  522. versioning operations can succeed, the ORM cannot always check that an UPDATE
  523. or DELETE statement matched the number of rows expected, which is how it
  524. verifies that the version identifier matched. When this condition occurs, a
  525. warning will be emitted but the operation will proceed.
  526. The use of OUTPUT INSERTED can be disabled by setting the
  527. :paramref:`_schema.Table.implicit_returning` flag to ``False`` on a particular
  528. :class:`_schema.Table`, which in declarative looks like::
  529. class MyTable(Base):
  530. __tablename__ = 'mytable'
  531. id = Column(Integer, primary_key=True)
  532. stuff = Column(String(10))
  533. timestamp = Column(TIMESTAMP(), default=text('DEFAULT'))
  534. __mapper_args__ = {
  535. 'version_id_col': timestamp,
  536. 'version_id_generator': False,
  537. }
  538. __table_args__ = {
  539. 'implicit_returning': False
  540. }
  541. Enabling Snapshot Isolation
  542. ---------------------------
  543. SQL Server has a default transaction
  544. isolation mode that locks entire tables, and causes even mildly concurrent
  545. applications to have long held locks and frequent deadlocks.
  546. Enabling snapshot isolation for the database as a whole is recommended
  547. for modern levels of concurrency support. This is accomplished via the
  548. following ALTER DATABASE commands executed at the SQL prompt::
  549. ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON
  550. ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON
  551. Background on SQL Server snapshot isolation is available at
  552. http://msdn.microsoft.com/en-us/library/ms175095.aspx.
  553. """ # noqa
  554. import codecs
  555. import datetime
  556. import operator
  557. import re
  558. from . import information_schema as ischema
  559. from .json import JSON
  560. from .json import JSONIndexType
  561. from .json import JSONPathType
  562. from ... import exc
  563. from ... import Identity
  564. from ... import schema as sa_schema
  565. from ... import Sequence
  566. from ... import sql
  567. from ... import types as sqltypes
  568. from ... import util
  569. from ...engine import cursor as _cursor
  570. from ...engine import default
  571. from ...engine import reflection
  572. from ...sql import coercions
  573. from ...sql import compiler
  574. from ...sql import elements
  575. from ...sql import expression
  576. from ...sql import func
  577. from ...sql import quoted_name
  578. from ...sql import roles
  579. from ...sql import util as sql_util
  580. from ...types import BIGINT
  581. from ...types import BINARY
  582. from ...types import CHAR
  583. from ...types import DATE
  584. from ...types import DATETIME
  585. from ...types import DECIMAL
  586. from ...types import FLOAT
  587. from ...types import INTEGER
  588. from ...types import NCHAR
  589. from ...types import NUMERIC
  590. from ...types import NVARCHAR
  591. from ...types import SMALLINT
  592. from ...types import TEXT
  593. from ...types import VARCHAR
  594. from ...util import compat
  595. from ...util import update_wrapper
  596. from ...util.langhelpers import public_factory
  597. # http://sqlserverbuilds.blogspot.com/
  598. MS_2017_VERSION = (14,)
  599. MS_2016_VERSION = (13,)
  600. MS_2014_VERSION = (12,)
  601. MS_2012_VERSION = (11,)
  602. MS_2008_VERSION = (10,)
  603. MS_2005_VERSION = (9,)
  604. MS_2000_VERSION = (8,)
  605. RESERVED_WORDS = set(
  606. [
  607. "add",
  608. "all",
  609. "alter",
  610. "and",
  611. "any",
  612. "as",
  613. "asc",
  614. "authorization",
  615. "backup",
  616. "begin",
  617. "between",
  618. "break",
  619. "browse",
  620. "bulk",
  621. "by",
  622. "cascade",
  623. "case",
  624. "check",
  625. "checkpoint",
  626. "close",
  627. "clustered",
  628. "coalesce",
  629. "collate",
  630. "column",
  631. "commit",
  632. "compute",
  633. "constraint",
  634. "contains",
  635. "containstable",
  636. "continue",
  637. "convert",
  638. "create",
  639. "cross",
  640. "current",
  641. "current_date",
  642. "current_time",
  643. "current_timestamp",
  644. "current_user",
  645. "cursor",
  646. "database",
  647. "dbcc",
  648. "deallocate",
  649. "declare",
  650. "default",
  651. "delete",
  652. "deny",
  653. "desc",
  654. "disk",
  655. "distinct",
  656. "distributed",
  657. "double",
  658. "drop",
  659. "dump",
  660. "else",
  661. "end",
  662. "errlvl",
  663. "escape",
  664. "except",
  665. "exec",
  666. "execute",
  667. "exists",
  668. "exit",
  669. "external",
  670. "fetch",
  671. "file",
  672. "fillfactor",
  673. "for",
  674. "foreign",
  675. "freetext",
  676. "freetexttable",
  677. "from",
  678. "full",
  679. "function",
  680. "goto",
  681. "grant",
  682. "group",
  683. "having",
  684. "holdlock",
  685. "identity",
  686. "identity_insert",
  687. "identitycol",
  688. "if",
  689. "in",
  690. "index",
  691. "inner",
  692. "insert",
  693. "intersect",
  694. "into",
  695. "is",
  696. "join",
  697. "key",
  698. "kill",
  699. "left",
  700. "like",
  701. "lineno",
  702. "load",
  703. "merge",
  704. "national",
  705. "nocheck",
  706. "nonclustered",
  707. "not",
  708. "null",
  709. "nullif",
  710. "of",
  711. "off",
  712. "offsets",
  713. "on",
  714. "open",
  715. "opendatasource",
  716. "openquery",
  717. "openrowset",
  718. "openxml",
  719. "option",
  720. "or",
  721. "order",
  722. "outer",
  723. "over",
  724. "percent",
  725. "pivot",
  726. "plan",
  727. "precision",
  728. "primary",
  729. "print",
  730. "proc",
  731. "procedure",
  732. "public",
  733. "raiserror",
  734. "read",
  735. "readtext",
  736. "reconfigure",
  737. "references",
  738. "replication",
  739. "restore",
  740. "restrict",
  741. "return",
  742. "revert",
  743. "revoke",
  744. "right",
  745. "rollback",
  746. "rowcount",
  747. "rowguidcol",
  748. "rule",
  749. "save",
  750. "schema",
  751. "securityaudit",
  752. "select",
  753. "session_user",
  754. "set",
  755. "setuser",
  756. "shutdown",
  757. "some",
  758. "statistics",
  759. "system_user",
  760. "table",
  761. "tablesample",
  762. "textsize",
  763. "then",
  764. "to",
  765. "top",
  766. "tran",
  767. "transaction",
  768. "trigger",
  769. "truncate",
  770. "tsequal",
  771. "union",
  772. "unique",
  773. "unpivot",
  774. "update",
  775. "updatetext",
  776. "use",
  777. "user",
  778. "values",
  779. "varying",
  780. "view",
  781. "waitfor",
  782. "when",
  783. "where",
  784. "while",
  785. "with",
  786. "writetext",
  787. ]
  788. )
  789. class REAL(sqltypes.REAL):
  790. __visit_name__ = "REAL"
  791. def __init__(self, **kw):
  792. # REAL is a synonym for FLOAT(24) on SQL server.
  793. # it is only accepted as the word "REAL" in DDL, the numeric
  794. # precision value is not allowed to be present
  795. kw.setdefault("precision", 24)
  796. super(REAL, self).__init__(**kw)
  797. class TINYINT(sqltypes.Integer):
  798. __visit_name__ = "TINYINT"
  799. # MSSQL DATE/TIME types have varied behavior, sometimes returning
  800. # strings. MSDate/TIME check for everything, and always
  801. # filter bind parameters into datetime objects (required by pyodbc,
  802. # not sure about other dialects).
  803. class _MSDate(sqltypes.Date):
  804. def bind_processor(self, dialect):
  805. def process(value):
  806. if type(value) == datetime.date:
  807. return datetime.datetime(value.year, value.month, value.day)
  808. else:
  809. return value
  810. return process
  811. _reg = re.compile(r"(\d+)-(\d+)-(\d+)")
  812. def result_processor(self, dialect, coltype):
  813. def process(value):
  814. if isinstance(value, datetime.datetime):
  815. return value.date()
  816. elif isinstance(value, util.string_types):
  817. m = self._reg.match(value)
  818. if not m:
  819. raise ValueError(
  820. "could not parse %r as a date value" % (value,)
  821. )
  822. return datetime.date(*[int(x or 0) for x in m.groups()])
  823. else:
  824. return value
  825. return process
  826. class TIME(sqltypes.TIME):
  827. def __init__(self, precision=None, **kwargs):
  828. self.precision = precision
  829. super(TIME, self).__init__()
  830. __zero_date = datetime.date(1900, 1, 1)
  831. def bind_processor(self, dialect):
  832. def process(value):
  833. if isinstance(value, datetime.datetime):
  834. value = datetime.datetime.combine(
  835. self.__zero_date, value.time()
  836. )
  837. elif isinstance(value, datetime.time):
  838. """issue #5339
  839. per: https://github.com/mkleehammer/pyodbc/wiki/Tips-and-Tricks-by-Database-Platform#time-columns
  840. pass TIME value as string
  841. """ # noqa
  842. value = str(value)
  843. return value
  844. return process
  845. _reg = re.compile(r"(\d+):(\d+):(\d+)(?:\.(\d{0,6}))?")
  846. def result_processor(self, dialect, coltype):
  847. def process(value):
  848. if isinstance(value, datetime.datetime):
  849. return value.time()
  850. elif isinstance(value, util.string_types):
  851. m = self._reg.match(value)
  852. if not m:
  853. raise ValueError(
  854. "could not parse %r as a time value" % (value,)
  855. )
  856. return datetime.time(*[int(x or 0) for x in m.groups()])
  857. else:
  858. return value
  859. return process
  860. _MSTime = TIME
  861. class _DateTimeBase(object):
  862. def bind_processor(self, dialect):
  863. def process(value):
  864. if type(value) == datetime.date:
  865. return datetime.datetime(value.year, value.month, value.day)
  866. else:
  867. return value
  868. return process
  869. class _MSDateTime(_DateTimeBase, sqltypes.DateTime):
  870. pass
  871. class SMALLDATETIME(_DateTimeBase, sqltypes.DateTime):
  872. __visit_name__ = "SMALLDATETIME"
  873. class DATETIME2(_DateTimeBase, sqltypes.DateTime):
  874. __visit_name__ = "DATETIME2"
  875. def __init__(self, precision=None, **kw):
  876. super(DATETIME2, self).__init__(**kw)
  877. self.precision = precision
  878. class DATETIMEOFFSET(_DateTimeBase, sqltypes.DateTime):
  879. __visit_name__ = "DATETIMEOFFSET"
  880. def __init__(self, precision=None, **kw):
  881. super(DATETIMEOFFSET, self).__init__(**kw)
  882. self.precision = precision
  883. class _UnicodeLiteral(object):
  884. def literal_processor(self, dialect):
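# Render the value as an N'...' national character literal, doubling any
# embedded single quotes; percent signs are doubled as well when the
# DBAPI's paramstyle requires percent escaping.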
  885. def process(value):
  886. value = value.replace("'", "''")
  887. if dialect.identifier_preparer._double_percents:
  888. value = value.replace("%", "%%")
  889. return "N'%s'" % value
  890. return process
  891. class _MSUnicode(_UnicodeLiteral, sqltypes.Unicode):
  892. pass
  893. class _MSUnicodeText(_UnicodeLiteral, sqltypes.UnicodeText):
  894. pass
  895. class TIMESTAMP(sqltypes._Binary):
  896. """Implement the SQL Server TIMESTAMP type.
  897. Note this is **completely different** than the SQL Standard
  898. TIMESTAMP type, which is not supported by SQL Server. It
  899. is a read-only datatype that does not support INSERT of values.
  900. .. versionadded:: 1.2
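A brief usage sketch; with ``convert_int=True``, the database's binary
row-version value is converted to a Python integer on result rows::

    from sqlalchemy.dialects.mssql import TIMESTAMP

    Column('ts', TIMESTAMP(convert_int=True))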
  901. .. seealso::
  902. :class:`_mssql.ROWVERSION`
  903. """
  904. __visit_name__ = "TIMESTAMP"
  905. # expected by _Binary to be present
  906. length = None
  907. def __init__(self, convert_int=False):
  908. """Construct a TIMESTAMP or ROWVERSION type.
  909. :param convert_int: if True, binary integer values will
  910. be converted to integers on read.
  911. .. versionadded:: 1.2
  912. """
  913. self.convert_int = convert_int
  914. def result_processor(self, dialect, coltype):
  915. super_ = super(TIMESTAMP, self).result_processor(dialect, coltype)
  916. if self.convert_int:
  917. def process(value):
  918. value = super_(value)
  919. if value is not None:
  920. # https://stackoverflow.com/a/30403242/34549
  921. value = int(codecs.encode(value, "hex"), 16)
  922. return value
  923. return process
  924. else:
  925. return super_
  926. class ROWVERSION(TIMESTAMP):
  927. """Implement the SQL Server ROWVERSION type.
  928. The ROWVERSION datatype is a SQL Server synonym for the TIMESTAMP
  929. datatype, however current SQL Server documentation suggests using
  930. ROWVERSION for new datatypes going forward.
  931. The ROWVERSION datatype does **not** reflect (e.g. introspect) from the
  932. database as itself; the returned datatype will be
  933. :class:`_mssql.TIMESTAMP`.
  934. This is a read-only datatype that does not support INSERT of values.
  935. .. versionadded:: 1.2
  936. .. seealso::
  937. :class:`_mssql.TIMESTAMP`
  938. """
  939. __visit_name__ = "ROWVERSION"
  940. class NTEXT(sqltypes.UnicodeText):
  941. """MSSQL NTEXT type, for variable-length unicode text up to 2^30
  942. characters."""
  943. __visit_name__ = "NTEXT"
  944. class VARBINARY(sqltypes.VARBINARY, sqltypes.LargeBinary):
  945. """The MSSQL VARBINARY type.
  946. This type is present to support "deprecate_large_types" mode where
  947. either ``VARBINARY(max)`` or IMAGE is rendered. Otherwise, this type
  948. object is redundant vs. :class:`_types.VARBINARY`.
  949. .. versionadded:: 1.0.0
  950. .. seealso::
  951. :ref:`mssql_large_type_deprecation`
  952. """
  953. __visit_name__ = "VARBINARY"
  954. class IMAGE(sqltypes.LargeBinary):
  955. __visit_name__ = "IMAGE"
  956. class XML(sqltypes.Text):
  957. """MSSQL XML type.
  958. This is a placeholder type for reflection purposes that does not include
  959. any Python-side datatype support. It also does not currently support
  960. additional arguments, such as "CONTENT", "DOCUMENT",
  961. "xml_schema_collection".
  962. .. versionadded:: 1.1.11
  963. """
  964. __visit_name__ = "XML"
  965. class BIT(sqltypes.Boolean):
  966. """MSSQL BIT type.
  967. Both pyodbc and pymssql return values from BIT columns as
  968. Python <class 'bool'> so just subclass Boolean.
  969. """
  970. __visit_name__ = "BIT"
  971. class MONEY(sqltypes.TypeEngine):
  972. __visit_name__ = "MONEY"
  973. class SMALLMONEY(sqltypes.TypeEngine):
  974. __visit_name__ = "SMALLMONEY"
  975. class UNIQUEIDENTIFIER(sqltypes.TypeEngine):
  976. __visit_name__ = "UNIQUEIDENTIFIER"
  977. class SQL_VARIANT(sqltypes.TypeEngine):
  978. __visit_name__ = "SQL_VARIANT"
  979. class TryCast(sql.elements.Cast):
  980. """Represent a SQL Server TRY_CAST expression."""
  981. __visit_name__ = "try_cast"
  982. stringify_dialect = "mssql"
  983. def __init__(self, *arg, **kw):
  984. """Create a TRY_CAST expression.
  985. :class:`.TryCast` is a subclass of SQLAlchemy's :class:`.Cast`
  986. construct, and works in the same way, except that the SQL expression
  987. rendered is "TRY_CAST" rather than "CAST"::
  988. from sqlalchemy import select
  989. from sqlalchemy import Numeric
  990. from sqlalchemy.dialects.mssql import try_cast
  991. stmt = select(
  992. try_cast(product_table.c.unit_price, Numeric(10, 4))
  993. )
  994. The above would render::
  995. SELECT TRY_CAST (product_table.unit_price AS NUMERIC(10, 4))
  996. FROM product_table
  997. .. versionadded:: 1.3.7
  998. """
  999. super(TryCast, self).__init__(*arg, **kw)
  1000. try_cast = public_factory(TryCast, ".dialects.mssql.try_cast")
  1001. # old names.
  1002. MSDateTime = _MSDateTime
  1003. MSDate = _MSDate
  1004. MSReal = REAL
  1005. MSTinyInteger = TINYINT
  1006. MSTime = TIME
  1007. MSSmallDateTime = SMALLDATETIME
  1008. MSDateTime2 = DATETIME2
  1009. MSDateTimeOffset = DATETIMEOFFSET
  1010. MSText = TEXT
  1011. MSNText = NTEXT
  1012. MSString = VARCHAR
  1013. MSNVarchar = NVARCHAR
  1014. MSChar = CHAR
  1015. MSNChar = NCHAR
  1016. MSBinary = BINARY
  1017. MSVarBinary = VARBINARY
  1018. MSImage = IMAGE
  1019. MSBit = BIT
  1020. MSMoney = MONEY
  1021. MSSmallMoney = SMALLMONEY
  1022. MSUniqueIdentifier = UNIQUEIDENTIFIER
  1023. MSVariant = SQL_VARIANT
  1024. ischema_names = {
  1025. "int": INTEGER,
  1026. "bigint": BIGINT,
  1027. "smallint": SMALLINT,
  1028. "tinyint": TINYINT,
  1029. "varchar": VARCHAR,
  1030. "nvarchar": NVARCHAR,
  1031. "char": CHAR,
  1032. "nchar": NCHAR,
  1033. "text": TEXT,
  1034. "ntext": NTEXT,
  1035. "decimal": DECIMAL,
  1036. "numeric": NUMERIC,
  1037. "float": FLOAT,
  1038. "datetime": DATETIME,
  1039. "datetime2": DATETIME2,
  1040. "datetimeoffset": DATETIMEOFFSET,
  1041. "date": DATE,
  1042. "time": TIME,
  1043. "smalldatetime": SMALLDATETIME,
  1044. "binary": BINARY,
  1045. "varbinary": VARBINARY,
  1046. "bit": BIT,
  1047. "real": REAL,
  1048. "image": IMAGE,
  1049. "xml": XML,
  1050. "timestamp": TIMESTAMP,
  1051. "money": MONEY,
  1052. "smallmoney": SMALLMONEY,
  1053. "uniqueidentifier": UNIQUEIDENTIFIER,
  1054. "sql_variant": SQL_VARIANT,
  1055. }
  1056. class MSTypeCompiler(compiler.GenericTypeCompiler):
  1057. def _extend(self, spec, type_, length=None):
  1058. """Extend a string-type declaration with standard SQL
  1059. COLLATE annotations.
  1060. """
  1061. if getattr(type_, "collation", None):
  1062. collation = "COLLATE %s" % type_.collation
  1063. else:
  1064. collation = None
  1065. if not length:
  1066. length = type_.length
  1067. if length:
  1068. spec = spec + "(%s)" % length
  1069. return " ".join([c for c in (spec, collation) if c is not None])
  1070. def visit_FLOAT(self, type_, **kw):
  1071. precision = getattr(type_, "precision", None)
  1072. if precision is None:
  1073. return "FLOAT"
  1074. else:
  1075. return "FLOAT(%(precision)s)" % {"precision": precision}
  1076. def visit_TINYINT(self, type_, **kw):
  1077. return "TINYINT"
  1078. def visit_TIME(self, type_, **kw):
  1079. precision = getattr(type_, "precision", None)
  1080. if precision is not None:
  1081. return "TIME(%s)" % precision
  1082. else:
  1083. return "TIME"
  1084. def visit_TIMESTAMP(self, type_, **kw):
  1085. return "TIMESTAMP"
  1086. def visit_ROWVERSION(self, type_, **kw):
  1087. return "ROWVERSION"
  1088. def visit_datetime(self, type_, **kw):
  1089. if type_.timezone:
  1090. return self.visit_DATETIMEOFFSET(type_, **kw)
  1091. else:
  1092. return self.visit_DATETIME(type_, **kw)
  1093. def visit_DATETIMEOFFSET(self, type_, **kw):
  1094. precision = getattr(type_, "precision", None)
  1095. if precision is not None:
  1096. return "DATETIMEOFFSET(%s)" % type_.precision
  1097. else:
  1098. return "DATETIMEOFFSET"
  1099. def visit_DATETIME2(self, type_, **kw):
  1100. precision = getattr(type_, "precision", None)
  1101. if precision is not None:
  1102. return "DATETIME2(%s)" % precision
  1103. else:
  1104. return "DATETIME2"
  1105. def visit_SMALLDATETIME(self, type_, **kw):
  1106. return "SMALLDATETIME"
  1107. def visit_unicode(self, type_, **kw):
  1108. return self.visit_NVARCHAR(type_, **kw)
  1109. def visit_text(self, type_, **kw):
  1110. if self.dialect.deprecate_large_types:
  1111. return self.visit_VARCHAR(type_, **kw)
  1112. else:
  1113. return self.visit_TEXT(type_, **kw)
  1114. def visit_unicode_text(self, type_, **kw):
  1115. if self.dialect.deprecate_large_types:
  1116. return self.visit_NVARCHAR(type_, **kw)
  1117. else:
  1118. return self.visit_NTEXT(type_, **kw)
  1119. def visit_NTEXT(self, type_, **kw):
  1120. return self._extend("NTEXT", type_)
  1121. def visit_TEXT(self, type_, **kw):
  1122. return self._extend("TEXT", type_)
  1123. def visit_VARCHAR(self, type_, **kw):
  1124. return self._extend("VARCHAR", type_, length=type_.length or "max")
  1125. def visit_CHAR(self, type_, **kw):
  1126. return self._extend("CHAR", type_)
  1127. def visit_NCHAR(self, type_, **kw):
  1128. return self._extend("NCHAR", type_)
  1129. def visit_NVARCHAR(self, type_, **kw):
  1130. return self._extend("NVARCHAR", type_, length=type_.length or "max")
  1131. def visit_date(self, type_, **kw):
  1132. if self.dialect.server_version_info < MS_2008_VERSION:
  1133. return self.visit_DATETIME(type_, **kw)
  1134. else:
  1135. return self.visit_DATE(type_, **kw)
  1136. def visit_time(self, type_, **kw):
  1137. if self.dialect.server_version_info < MS_2008_VERSION:
  1138. return self.visit_DATETIME(type_, **kw)
  1139. else:
  1140. return self.visit_TIME(type_, **kw)
  1141. def visit_large_binary(self, type_, **kw):
  1142. if self.dialect.deprecate_large_types:
  1143. return self.visit_VARBINARY(type_, **kw)
  1144. else:
  1145. return self.visit_IMAGE(type_, **kw)
  1146. def visit_IMAGE(self, type_, **kw):
  1147. return "IMAGE"
  1148. def visit_XML(self, type_, **kw):
  1149. return "XML"
  1150. def visit_VARBINARY(self, type_, **kw):
  1151. return self._extend("VARBINARY", type_, length=type_.length or "max")
  1152. def visit_boolean(self, type_, **kw):
  1153. return self.visit_BIT(type_)
  1154. def visit_BIT(self, type_, **kw):
  1155. return "BIT"
  1156. def visit_JSON(self, type_, **kw):
  1157. # this is a bit of a break with SQLAlchemy's convention of
  1158. # "UPPERCASE name goes to UPPERCASE type name with no modification"
  1159. return self._extend("NVARCHAR", type_, length="max")
  1160. def visit_MONEY(self, type_, **kw):
  1161. return "MONEY"
  1162. def visit_SMALLMONEY(self, type_, **kw):
  1163. return "SMALLMONEY"
  1164. def visit_UNIQUEIDENTIFIER(self, type_, **kw):
  1165. return "UNIQUEIDENTIFIER"
  1166. def visit_SQL_VARIANT(self, type_, **kw):
  1167. return "SQL_VARIANT"
  1168. class MSExecutionContext(default.DefaultExecutionContext):
  1169. _enable_identity_insert = False
  1170. _select_lastrowid = False
  1171. _lastrowid = None
  1172. _rowcount = None
  1173. _result_strategy = None
  1174. def _opt_encode(self, statement):
  1175. if not self.dialect.supports_unicode_statements:
  1176. encoded = self.dialect._encoder(statement)[0]
  1177. else:
  1178. encoded = statement
  1179. if self.compiled and self.compiled.schema_translate_map:
  1180. rst = self.compiled.preparer._render_schema_translates
  1181. encoded = rst(encoded, self.compiled.schema_translate_map)
  1182. return encoded
  1183. def pre_exec(self):
  1184. """Activate IDENTITY_INSERT if needed."""
  1185. if self.isinsert:
  1186. tbl = self.compiled.compile_state.dml_table
  1187. id_column = tbl._autoincrement_column
  1188. insert_has_identity = (id_column is not None) and (
  1189. not isinstance(id_column.default, Sequence)
  1190. )
  1191. if insert_has_identity:
  1192. compile_state = self.compiled.compile_state
  1193. self._enable_identity_insert = (
  1194. id_column.key in self.compiled_parameters[0]
  1195. ) or (
  1196. compile_state._dict_parameters
  1197. and (
  1198. id_column.key in compile_state._dict_parameters
  1199. or id_column in compile_state._dict_parameters
  1200. )
  1201. )
  1202. else:
  1203. self._enable_identity_insert = False
  1204. self._select_lastrowid = (
  1205. not self.compiled.inline
  1206. and insert_has_identity
  1207. and not self.compiled.returning
  1208. and not self._enable_identity_insert
  1209. and not self.executemany
  1210. )
  1211. if self._enable_identity_insert:
  1212. self.root_connection._cursor_execute(
  1213. self.cursor,
  1214. self._opt_encode(
  1215. "SET IDENTITY_INSERT %s ON"
  1216. % self.identifier_preparer.format_table(tbl)
  1217. ),
  1218. (),
  1219. self,
  1220. )
  1221. def post_exec(self):
  1222. """Disable IDENTITY_INSERT if enabled."""
  1223. conn = self.root_connection
  1224. if self.isinsert or self.isupdate or self.isdelete:
  1225. self._rowcount = self.cursor.rowcount
  1226. if self._select_lastrowid:
  1227. if self.dialect.use_scope_identity:
  1228. conn._cursor_execute(
  1229. self.cursor,
  1230. "SELECT scope_identity() AS lastrowid",
  1231. (),
  1232. self,
  1233. )
  1234. else:
  1235. conn._cursor_execute(
  1236. self.cursor, "SELECT @@identity AS lastrowid", (), self
  1237. )
  1238. # fetchall() ensures the cursor is consumed without closing it
  1239. row = self.cursor.fetchall()[0]
  1240. self._lastrowid = int(row[0])
  1241. elif (
  1242. self.isinsert or self.isupdate or self.isdelete
  1243. ) and self.compiled.returning:
  1244. self.cursor_fetch_strategy = (
  1245. _cursor.FullyBufferedCursorFetchStrategy(
  1246. self.cursor,
  1247. self.cursor.description,
  1248. self.cursor.fetchall(),
  1249. )
  1250. )
  1251. if self._enable_identity_insert:
  1252. conn._cursor_execute(
  1253. self.cursor,
  1254. self._opt_encode(
  1255. "SET IDENTITY_INSERT %s OFF"
  1256. % self.identifier_preparer.format_table(
  1257. self.compiled.compile_state.dml_table
  1258. )
  1259. ),
  1260. (),
  1261. self,
  1262. )
  1263. def get_lastrowid(self):
  1264. return self._lastrowid
  1265. @property
  1266. def rowcount(self):
  1267. if self._rowcount is not None:
  1268. return self._rowcount
  1269. else:
  1270. return self.cursor.rowcount
  1271. def handle_dbapi_exception(self, e):
  1272. if self._enable_identity_insert:
  1273. try:
  1274. self.cursor.execute(
  1275. self._opt_encode(
  1276. "SET IDENTITY_INSERT %s OFF"
  1277. % self.identifier_preparer.format_table(
  1278. self.compiled.compile_state.dml_table
  1279. )
  1280. )
  1281. )
  1282. except Exception:
  1283. pass
  1284. def get_result_cursor_strategy(self, result):
  1285. if self._result_strategy:
  1286. return self._result_strategy
  1287. else:
  1288. return super(MSExecutionContext, self).get_result_cursor_strategy(
  1289. result
  1290. )
  1291. def fire_sequence(self, seq, type_):
  1292. return self._execute_scalar(
  1293. (
  1294. "SELECT NEXT VALUE FOR %s"
  1295. % self.identifier_preparer.format_sequence(seq)
  1296. ),
  1297. type_,
  1298. )
  1299. def get_insert_default(self, column):
  1300. if (
  1301. isinstance(column, sa_schema.Column)
  1302. and column is column.table._autoincrement_column
  1303. and isinstance(column.default, sa_schema.Sequence)
  1304. and column.default.optional
  1305. ):
  1306. return None
  1307. return super(MSExecutionContext, self).get_insert_default(column)
  1308. class MSSQLCompiler(compiler.SQLCompiler):
  1309. returning_precedes_values = True
  1310. extract_map = util.update_copy(
  1311. compiler.SQLCompiler.extract_map,
  1312. {
  1313. "doy": "dayofyear",
  1314. "dow": "weekday",
  1315. "milliseconds": "millisecond",
  1316. "microseconds": "microsecond",
  1317. },
  1318. )
  1319. def __init__(self, *args, **kwargs):
  1320. self.tablealiases = {}
  1321. super(MSSQLCompiler, self).__init__(*args, **kwargs)
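# Decorator: invoke the decorated method only when the deprecated
# ``legacy_schema_aliasing`` flag is enabled; otherwise defer to the
# base SQLCompiler implementation of the same name.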
  1322. def _with_legacy_schema_aliasing(fn):
  1323. def decorate(self, *arg, **kw):
  1324. if self.dialect.legacy_schema_aliasing:
  1325. return fn(self, *arg, **kw)
  1326. else:
  1327. super_ = getattr(super(MSSQLCompiler, self), fn.__name__)
  1328. return super_(*arg, **kw)
  1329. return decorate
  1330. def visit_now_func(self, fn, **kw):
  1331. return "CURRENT_TIMESTAMP"
  1332. def visit_current_date_func(self, fn, **kw):
  1333. return "GETDATE()"
  1334. def visit_length_func(self, fn, **kw):
  1335. return "LEN%s" % self.function_argspec(fn, **kw)
  1336. def visit_char_length_func(self, fn, **kw):
  1337. return "LEN%s" % self.function_argspec(fn, **kw)
  1338. def visit_concat_op_binary(self, binary, operator, **kw):
  1339. return "%s + %s" % (
  1340. self.process(binary.left, **kw),
  1341. self.process(binary.right, **kw),
  1342. )
  1343. def visit_true(self, expr, **kw):
  1344. return "1"
  1345. def visit_false(self, expr, **kw):
  1346. return "0"
  1347. def visit_match_op_binary(self, binary, operator, **kw):
  1348. return "CONTAINS (%s, %s)" % (
  1349. self.process(binary.left, **kw),
  1350. self.process(binary.right, **kw),
  1351. )
  1352. def get_select_precolumns(self, select, **kw):
  1353. """MS-SQL puts TOP, it's version of LIMIT here"""
  1354. s = super(MSSQLCompiler, self).get_select_precolumns(select, **kw)
  1355. if select._has_row_limiting_clause and self._use_top(select):
  1356. # ODBC drivers and possibly others
  1357. # don't support bind params in the SELECT clause on SQL Server.
  1358. # so have to use literal here.
  1359. kw["literal_execute"] = True
  1360. s += "TOP %s " % self.process(
  1361. self._get_limit_or_fetch(select), **kw
  1362. )
  1363. if select._fetch_clause is not None:
  1364. if select._fetch_clause_options["percent"]:
  1365. s += "PERCENT "
  1366. if select._fetch_clause_options["with_ties"]:
  1367. s += "WITH TIES "
  1368. return s
  1369. def get_from_hint_text(self, table, text):
  1370. return text
  1371. def get_crud_hint_text(self, table, text):
  1372. return text
  1373. def _get_limit_or_fetch(self, select):
  1374. if select._fetch_clause is None:
  1375. return select._limit_clause
  1376. else:
  1377. return select._fetch_clause
  1378. def _use_top(self, select):
  1379. return (select._offset_clause is None) and (
  1380. select._simple_int_clause(select._limit_clause)
  1381. or (
1382. # limit can use TOP when it is by itself. fetch only uses TOP
1383. # when it needs to because of PERCENT and/or WITH TIES
  1384. select._simple_int_clause(select._fetch_clause)
  1385. and (
  1386. select._fetch_clause_options["percent"]
  1387. or select._fetch_clause_options["with_ties"]
  1388. )
  1389. )
  1390. )
  1391. def fetch_clause(self, cs, **kwargs):
  1392. return ""
  1393. def limit_clause(self, cs, **kwargs):
  1394. return ""
  1395. def _check_can_use_fetch_limit(self, select):
  1396. # to use ROW_NUMBER(), an ORDER BY is required.
1397. # OFFSET and FETCH are options of the ORDER BY clause
  1398. if not select._order_by_clause.clauses:
  1399. raise exc.CompileError(
  1400. "MSSQL requires an order_by when "
  1401. "using an OFFSET or a non-simple "
  1402. "LIMIT clause"
  1403. )
  1404. if select._fetch_clause_options is not None and (
  1405. select._fetch_clause_options["percent"]
  1406. or select._fetch_clause_options["with_ties"]
  1407. ):
  1408. raise exc.CompileError(
  1409. "MSSQL needs TOP to use PERCENT and/or WITH TIES. "
  1410. "Only simple fetch without offset can be used."
  1411. )
  1412. def _row_limit_clause(self, select, **kw):
  1413. """MSSQL 2012 supports OFFSET/FETCH operators
  1414. Use it instead subquery with row_number
  1415. """
  1416. if self.dialect._supports_offset_fetch and not self._use_top(select):
  1417. self._check_can_use_fetch_limit(select)
  1418. text = ""
  1419. if select._offset_clause is not None:
  1420. offset_str = self.process(select._offset_clause, **kw)
  1421. else:
  1422. offset_str = "0"
  1423. text += "\n OFFSET %s ROWS" % offset_str
  1424. limit = self._get_limit_or_fetch(select)
  1425. if limit is not None:
  1426. text += "\n FETCH FIRST %s ROWS ONLY" % self.process(
  1427. limit, **kw
  1428. )
  1429. return text
  1430. else:
  1431. return ""
  1432. def visit_try_cast(self, element, **kw):
  1433. return "TRY_CAST (%s AS %s)" % (
  1434. self.process(element.clause, **kw),
  1435. self.process(element.typeclause, **kw),
  1436. )
  1437. def translate_select_structure(self, select_stmt, **kwargs):
  1438. """Look for ``LIMIT`` and OFFSET in a select statement, and if
  1439. so tries to wrap it in a subquery with ``row_number()`` criterion.
  1440. MSSQL 2012 and above are excluded
  1441. """
  1442. select = select_stmt
  1443. if (
  1444. select._has_row_limiting_clause
  1445. and not self.dialect._supports_offset_fetch
  1446. and not self._use_top(select)
  1447. and not getattr(select, "_mssql_visit", None)
  1448. ):
  1449. self._check_can_use_fetch_limit(select)
  1450. _order_by_clauses = [
  1451. sql_util.unwrap_label_reference(elem)
  1452. for elem in select._order_by_clause.clauses
  1453. ]
  1454. limit_clause = self._get_limit_or_fetch(select)
  1455. offset_clause = select._offset_clause
  1456. select = select._generate()
  1457. select._mssql_visit = True
  1458. select = (
  1459. select.add_columns(
  1460. sql.func.ROW_NUMBER()
  1461. .over(order_by=_order_by_clauses)
  1462. .label("mssql_rn")
  1463. )
  1464. .order_by(None)
  1465. .alias()
  1466. )
  1467. mssql_rn = sql.column("mssql_rn")
  1468. limitselect = sql.select(
  1469. *[c for c in select.c if c.key != "mssql_rn"]
  1470. )
  1471. if offset_clause is not None:
  1472. limitselect = limitselect.where(mssql_rn > offset_clause)
  1473. if limit_clause is not None:
  1474. limitselect = limitselect.where(
  1475. mssql_rn <= (limit_clause + offset_clause)
  1476. )
  1477. else:
  1478. limitselect = limitselect.where(mssql_rn <= (limit_clause))
  1479. return limitselect
  1480. else:
  1481. return select
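# Illustrative sketch (table "t" assumed) of the pre-2012 rewrite:
# select(t).order_by(t.c.id).offset(10).limit(5) is wrapped roughly as:
#   SELECT anon.id, anon.x FROM (
#       SELECT t.id, t.x, ROW_NUMBER() OVER (ORDER BY t.id) AS mssql_rn
#       FROM t
#   ) AS anon
#   WHERE mssql_rn > 10 AND mssql_rn <= 15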
  1482. @_with_legacy_schema_aliasing
  1483. def visit_table(self, table, mssql_aliased=False, iscrud=False, **kwargs):
  1484. if mssql_aliased is table or iscrud:
  1485. return super(MSSQLCompiler, self).visit_table(table, **kwargs)
  1486. # alias schema-qualified tables
  1487. alias = self._schema_aliased_table(table)
  1488. if alias is not None:
  1489. return self.process(alias, mssql_aliased=table, **kwargs)
  1490. else:
  1491. return super(MSSQLCompiler, self).visit_table(table, **kwargs)
  1492. @_with_legacy_schema_aliasing
  1493. def visit_alias(self, alias, **kw):
  1494. # translate for schema-qualified table aliases
  1495. kw["mssql_aliased"] = alias.element
  1496. return super(MSSQLCompiler, self).visit_alias(alias, **kw)
  1497. @_with_legacy_schema_aliasing
  1498. def visit_column(self, column, add_to_result_map=None, **kw):
  1499. if (
  1500. column.table is not None
  1501. and (not self.isupdate and not self.isdelete)
  1502. or self.is_subquery()
  1503. ):
  1504. # translate for schema-qualified table aliases
  1505. t = self._schema_aliased_table(column.table)
  1506. if t is not None:
  1507. converted = elements._corresponding_column_or_error(t, column)
  1508. if add_to_result_map is not None:
  1509. add_to_result_map(
  1510. column.name,
  1511. column.name,
  1512. (column, column.name, column.key),
  1513. column.type,
  1514. )
  1515. return super(MSSQLCompiler, self).visit_column(converted, **kw)
  1516. return super(MSSQLCompiler, self).visit_column(
  1517. column, add_to_result_map=add_to_result_map, **kw
  1518. )
  1519. def _schema_aliased_table(self, table):
  1520. if getattr(table, "schema", None) is not None:
  1521. if table not in self.tablealiases:
  1522. self.tablealiases[table] = table.alias()
  1523. return self.tablealiases[table]
  1524. else:
  1525. return None
  1526. def visit_extract(self, extract, **kw):
  1527. field = self.extract_map.get(extract.field, extract.field)
  1528. return "DATEPART(%s, %s)" % (field, self.process(extract.expr, **kw))
  1529. def visit_savepoint(self, savepoint_stmt):
  1530. return "SAVE TRANSACTION %s" % self.preparer.format_savepoint(
  1531. savepoint_stmt
  1532. )
  1533. def visit_rollback_to_savepoint(self, savepoint_stmt):
  1534. return "ROLLBACK TRANSACTION %s" % self.preparer.format_savepoint(
  1535. savepoint_stmt
  1536. )
  1537. def visit_binary(self, binary, **kwargs):
  1538. """Move bind parameters to the right-hand side of an operator, where
  1539. possible.
  1540. """
  1541. if (
  1542. isinstance(binary.left, expression.BindParameter)
  1543. and binary.operator == operator.eq
  1544. and not isinstance(binary.right, expression.BindParameter)
  1545. ):
  1546. return self.process(
  1547. expression.BinaryExpression(
  1548. binary.right, binary.left, binary.operator
  1549. ),
  1550. **kwargs
  1551. )
  1552. return super(MSSQLCompiler, self).visit_binary(binary, **kwargs)
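# Illustrative sketch: a comparison with the bind on the left, such as
# literal(5) == t.c.x, is flipped to render as "t.x = :param_1" rather
# than ":param_1 = t.x" (names assumed).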
  1553. def returning_clause(self, stmt, returning_cols):
  1554. # SQL server returning clause requires that the columns refer to
  1555. # the virtual table names "inserted" or "deleted". Here, we make
  1556. # a simple alias of our table with that name, and then adapt the
  1557. # columns we have from the list of RETURNING columns to that new name
  1558. # so that they render as "inserted.<colname>" / "deleted.<colname>".
  1559. if self.isinsert or self.isupdate:
  1560. target = stmt.table.alias("inserted")
  1561. else:
  1562. target = stmt.table.alias("deleted")
  1563. adapter = sql_util.ClauseAdapter(target)
  1564. # adapter.traverse() takes a column from our target table and returns
  1565. # the one that is linked to the "inserted" / "deleted" tables. So in
  1566. # order to retrieve these values back from the result (e.g. like
  1567. # row[column]), tell the compiler to also add the original unadapted
  1568. # column to the result map. Before #4877, these were (unknowingly)
  1569. # falling back using string name matching in the result set which
  1570. # necessarily used an expensive KeyError in order to match.
  1571. columns = [
  1572. self._label_select_column(
  1573. None,
  1574. adapter.traverse(c),
  1575. True,
  1576. False,
  1577. {"result_map_targets": (c,)},
  1578. )
  1579. for c in expression._select_iterables(returning_cols)
  1580. ]
  1581. return "OUTPUT " + ", ".join(columns)
  1582. def get_cte_preamble(self, recursive):
  1583. # SQL Server finds it too inconvenient to accept
  1584. # an entirely optional, SQL standard specified,
  1585. # "RECURSIVE" word with their "WITH",
  1586. # so here we go
  1587. return "WITH"
  1588. def label_select_column(self, select, column, asfrom):
  1589. if isinstance(column, expression.Function):
  1590. return column.label(None)
  1591. else:
  1592. return super(MSSQLCompiler, self).label_select_column(
  1593. select, column, asfrom
  1594. )
  1595. def for_update_clause(self, select, **kw):
  1596. # "FOR UPDATE" is only allowed on "DECLARE CURSOR" which
  1597. # SQLAlchemy doesn't use
  1598. return ""
  1599. def order_by_clause(self, select, **kw):
  1600. # MSSQL only allows ORDER BY in subqueries if there is a LIMIT
  1601. if (
  1602. self.is_subquery()
  1603. and not select._limit
  1604. and (
  1605. select._offset is None
  1606. or not self.dialect._supports_offset_fetch
  1607. )
  1608. ):
  1609. # avoid processing the order by clause if we won't end up
  1610. # using it, because we don't want all the bind params tacked
  1611. # onto the positional list if that is what the dbapi requires
  1612. return ""
  1613. order_by = self.process(select._order_by_clause, **kw)
  1614. if order_by:
  1615. return " ORDER BY " + order_by
  1616. else:
  1617. return ""
  1618. def update_from_clause(
  1619. self, update_stmt, from_table, extra_froms, from_hints, **kw
  1620. ):
  1621. """Render the UPDATE..FROM clause specific to MSSQL.
  1622. In MSSQL, if the UPDATE statement involves an alias of the table to
  1623. be updated, then the table itself must be added to the FROM list as
  1624. well. Otherwise, it is optional. Here, we add it regardless.
  1625. """
  1626. return "FROM " + ", ".join(
  1627. t._compiler_dispatch(self, asfrom=True, fromhints=from_hints, **kw)
  1628. for t in [from_table] + extra_froms
  1629. )
  1630. def delete_table_clause(self, delete_stmt, from_table, extra_froms):
  1631. """If we have extra froms make sure we render any alias as hint."""
  1632. ashint = False
  1633. if extra_froms:
  1634. ashint = True
  1635. return from_table._compiler_dispatch(
  1636. self, asfrom=True, iscrud=True, ashint=ashint
  1637. )
  1638. def delete_extra_from_clause(
  1639. self, delete_stmt, from_table, extra_froms, from_hints, **kw
  1640. ):
  1641. """Render the DELETE .. FROM clause specific to MSSQL.
  1642. Yes, it has the FROM keyword twice.
  1643. """
  1644. return "FROM " + ", ".join(
  1645. t._compiler_dispatch(self, asfrom=True, fromhints=from_hints, **kw)
  1646. for t in [from_table] + extra_froms
  1647. )
  1648. def visit_empty_set_expr(self, type_):
  1649. return "SELECT 1 WHERE 1!=1"
  1650. def visit_is_distinct_from_binary(self, binary, operator, **kw):
  1651. return "NOT EXISTS (SELECT %s INTERSECT SELECT %s)" % (
  1652. self.process(binary.left),
  1653. self.process(binary.right),
  1654. )
  1655. def visit_is_not_distinct_from_binary(self, binary, operator, **kw):
  1656. return "EXISTS (SELECT %s INTERSECT SELECT %s)" % (
  1657. self.process(binary.left),
  1658. self.process(binary.right),
  1659. )
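# Illustrative sketch: t.c.x.is_distinct_from(t.c.y) renders as the
# NULL-safe form
#   NOT EXISTS (SELECT t.x INTERSECT SELECT t.y)
# relying on INTERSECT treating two NULLs as not distinct.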
  1660. def _render_json_extract_from_binary(self, binary, operator, **kw):
  1661. # note we are intentionally calling upon the process() calls in the
  1662. # order in which they appear in the SQL String as this is used
  1663. # by positional parameter rendering
  1664. if binary.type._type_affinity is sqltypes.JSON:
  1665. return "JSON_QUERY(%s, %s)" % (
  1666. self.process(binary.left, **kw),
  1667. self.process(binary.right, **kw),
  1668. )
  1669. # as with other dialects, start with an explicit test for NULL
  1670. case_expression = "CASE JSON_VALUE(%s, %s) WHEN NULL THEN NULL" % (
  1671. self.process(binary.left, **kw),
  1672. self.process(binary.right, **kw),
  1673. )
  1674. if binary.type._type_affinity is sqltypes.Integer:
  1675. type_expression = "ELSE CAST(JSON_VALUE(%s, %s) AS INTEGER)" % (
  1676. self.process(binary.left, **kw),
  1677. self.process(binary.right, **kw),
  1678. )
  1679. elif binary.type._type_affinity is sqltypes.Numeric:
  1680. type_expression = "ELSE CAST(JSON_VALUE(%s, %s) AS %s)" % (
  1681. self.process(binary.left, **kw),
  1682. self.process(binary.right, **kw),
  1683. "FLOAT"
  1684. if isinstance(binary.type, sqltypes.Float)
  1685. else "NUMERIC(%s, %s)"
  1686. % (binary.type.precision, binary.type.scale),
  1687. )
  1688. elif binary.type._type_affinity is sqltypes.Boolean:
  1689. # the NULL handling is particularly weird with boolean, so
  1690. # explicitly return numeric (BIT) constants
  1691. type_expression = (
  1692. "WHEN 'true' THEN 1 WHEN 'false' THEN 0 ELSE NULL"
  1693. )
  1694. elif binary.type._type_affinity is sqltypes.String:
1695. # TODO: does this comment (from mysql) apply here, too?
  1696. # this fails with a JSON value that's a four byte unicode
  1697. # string. SQLite has the same problem at the moment
  1698. type_expression = "ELSE JSON_VALUE(%s, %s)" % (
  1699. self.process(binary.left, **kw),
  1700. self.process(binary.right, **kw),
  1701. )
  1702. else:
1703. # other affinity... this is not expected right now
  1704. type_expression = "ELSE JSON_QUERY(%s, %s)" % (
  1705. self.process(binary.left, **kw),
  1706. self.process(binary.right, **kw),
  1707. )
  1708. return case_expression + " " + type_expression + " END"
  1709. def visit_json_getitem_op_binary(self, binary, operator, **kw):
  1710. return self._render_json_extract_from_binary(binary, operator, **kw)
  1711. def visit_json_path_getitem_op_binary(self, binary, operator, **kw):
  1712. return self._render_json_extract_from_binary(binary, operator, **kw)
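# Illustrative sketch for a JSON column "data" (name assumed):
# t.c.data["a"].as_integer() renders roughly as:
#   CASE JSON_VALUE(t.data, :data_1) WHEN NULL THEN NULL
#   ELSE CAST(JSON_VALUE(t.data, :data_1) AS INTEGER) END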
  1713. def visit_sequence(self, seq, **kw):
  1714. return "NEXT VALUE FOR %s" % self.preparer.format_sequence(seq)
  1715. class MSSQLStrictCompiler(MSSQLCompiler):
  1716. """A subclass of MSSQLCompiler which disables the usage of bind
  1717. parameters where not allowed natively by MS-SQL.
  1718. A dialect may use this compiler on a platform where native
  1719. binds are used.
  1720. """
  1721. ansi_bind_rules = True
  1722. def visit_in_op_binary(self, binary, operator, **kw):
  1723. kw["literal_execute"] = True
  1724. return "%s IN %s" % (
  1725. self.process(binary.left, **kw),
  1726. self.process(binary.right, **kw),
  1727. )
  1728. def visit_not_in_op_binary(self, binary, operator, **kw):
  1729. kw["literal_execute"] = True
  1730. return "%s NOT IN %s" % (
  1731. self.process(binary.left, **kw),
  1732. self.process(binary.right, **kw),
  1733. )
  1734. def render_literal_value(self, value, type_):
  1735. """
  1736. For date and datetime values, convert to a string
  1737. format acceptable to MSSQL. That seems to be the
1738. so-called ODBC canonical date format, which looks
  1739. like this:
  1740. yyyy-mm-dd hh:mi:ss.mmm(24h)
  1741. For other data types, call the base class implementation.
  1742. """
  1743. # datetime and date are both subclasses of datetime.date
  1744. if issubclass(type(value), datetime.date):
  1745. # SQL Server wants single quotes around the date string.
  1746. return "'" + str(value) + "'"
  1747. else:
  1748. return super(MSSQLStrictCompiler, self).render_literal_value(
  1749. value, type_
  1750. )
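# e.g. datetime.date(2021, 1, 2) would be inlined as '2021-01-02'
# (illustrative value, not part of the module).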
  1751. class MSDDLCompiler(compiler.DDLCompiler):
  1752. def get_column_specification(self, column, **kwargs):
  1753. colspec = self.preparer.format_column(column)
  1754. # type is not accepted in a computed column
  1755. if column.computed is not None:
  1756. colspec += " " + self.process(column.computed)
  1757. else:
  1758. colspec += " " + self.dialect.type_compiler.process(
  1759. column.type, type_expression=column
  1760. )
  1761. if column.nullable is not None:
  1762. if (
  1763. not column.nullable
  1764. or column.primary_key
  1765. or isinstance(column.default, sa_schema.Sequence)
  1766. or column.autoincrement is True
  1767. or column.identity
  1768. ):
  1769. colspec += " NOT NULL"
  1770. elif column.computed is None:
  1771. # don't specify "NULL" for computed columns
  1772. colspec += " NULL"
  1773. if column.table is None:
  1774. raise exc.CompileError(
  1775. "mssql requires Table-bound columns "
  1776. "in order to generate DDL"
  1777. )
  1778. d_opt = column.dialect_options["mssql"]
  1779. start = d_opt["identity_start"]
  1780. increment = d_opt["identity_increment"]
  1781. if start is not None or increment is not None:
  1782. if column.identity:
  1783. raise exc.CompileError(
  1784. "Cannot specify options 'mssql_identity_start' and/or "
  1785. "'mssql_identity_increment' while also using the "
  1786. "'Identity' construct."
  1787. )
  1788. util.warn_deprecated(
  1789. "The dialect options 'mssql_identity_start' and "
  1790. "'mssql_identity_increment' are deprecated. "
  1791. "Use the 'Identity' object instead.",
  1792. "1.4",
  1793. )
  1794. if column.identity:
  1795. colspec += self.process(column.identity, **kwargs)
  1796. elif (
  1797. column is column.table._autoincrement_column
  1798. or column.autoincrement is True
  1799. ) and (
  1800. not isinstance(column.default, Sequence) or column.default.optional
  1801. ):
  1802. colspec += self.process(Identity(start=start, increment=increment))
  1803. else:
  1804. default = self.get_column_default_string(column)
  1805. if default is not None:
  1806. colspec += " DEFAULT " + default
  1807. return colspec
  1808. def visit_create_index(self, create, include_schema=False):
  1809. index = create.element
  1810. self._verify_index_table(index)
  1811. preparer = self.preparer
  1812. text = "CREATE "
  1813. if index.unique:
  1814. text += "UNIQUE "
  1815. # handle clustering option
  1816. clustered = index.dialect_options["mssql"]["clustered"]
  1817. if clustered is not None:
  1818. if clustered:
  1819. text += "CLUSTERED "
  1820. else:
  1821. text += "NONCLUSTERED "
  1822. text += "INDEX %s ON %s (%s)" % (
  1823. self._prepared_index_name(index, include_schema=include_schema),
  1824. preparer.format_table(index.table),
  1825. ", ".join(
  1826. self.sql_compiler.process(
  1827. expr, include_table=False, literal_binds=True
  1828. )
  1829. for expr in index.expressions
  1830. ),
  1831. )
  1832. # handle other included columns
  1833. if index.dialect_options["mssql"]["include"]:
  1834. inclusions = [
  1835. index.table.c[col]
  1836. if isinstance(col, util.string_types)
  1837. else col
  1838. for col in index.dialect_options["mssql"]["include"]
  1839. ]
  1840. text += " INCLUDE (%s)" % ", ".join(
  1841. [preparer.quote(c.name) for c in inclusions]
  1842. )
  1843. whereclause = index.dialect_options["mssql"]["where"]
  1844. if whereclause is not None:
  1845. whereclause = coercions.expect(
  1846. roles.DDLExpressionRole, whereclause
  1847. )
  1848. where_compiled = self.sql_compiler.process(
  1849. whereclause, include_table=False, literal_binds=True
  1850. )
  1851. text += " WHERE " + where_compiled
  1852. return text
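# Illustrative DDL sketch (names assumed):
#   Index("ix_x", t.c.x, mssql_clustered=True,
#         mssql_include=["y"], mssql_where=t.c.x > 5)
# emits roughly:
#   CREATE CLUSTERED INDEX ix_x ON t (x) INCLUDE (y) WHERE x > 5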
  1853. def visit_drop_index(self, drop):
  1854. return "\nDROP INDEX %s ON %s" % (
  1855. self._prepared_index_name(drop.element, include_schema=False),
  1856. self.preparer.format_table(drop.element.table),
  1857. )
  1858. def visit_primary_key_constraint(self, constraint):
  1859. if len(constraint) == 0:
  1860. return ""
  1861. text = ""
  1862. if constraint.name is not None:
  1863. text += "CONSTRAINT %s " % self.preparer.format_constraint(
  1864. constraint
  1865. )
  1866. text += "PRIMARY KEY "
  1867. clustered = constraint.dialect_options["mssql"]["clustered"]
  1868. if clustered is not None:
  1869. if clustered:
  1870. text += "CLUSTERED "
  1871. else:
  1872. text += "NONCLUSTERED "
  1873. text += "(%s)" % ", ".join(
  1874. self.preparer.quote(c.name) for c in constraint
  1875. )
  1876. text += self.define_constraint_deferrability(constraint)
  1877. return text
  1878. def visit_unique_constraint(self, constraint):
  1879. if len(constraint) == 0:
  1880. return ""
  1881. text = ""
  1882. if constraint.name is not None:
  1883. formatted_name = self.preparer.format_constraint(constraint)
  1884. if formatted_name is not None:
  1885. text += "CONSTRAINT %s " % formatted_name
  1886. text += "UNIQUE "
  1887. clustered = constraint.dialect_options["mssql"]["clustered"]
  1888. if clustered is not None:
  1889. if clustered:
  1890. text += "CLUSTERED "
  1891. else:
  1892. text += "NONCLUSTERED "
  1893. text += "(%s)" % ", ".join(
  1894. self.preparer.quote(c.name) for c in constraint
  1895. )
  1896. text += self.define_constraint_deferrability(constraint)
  1897. return text
  1898. def visit_computed_column(self, generated):
  1899. text = "AS (%s)" % self.sql_compiler.process(
  1900. generated.sqltext, include_table=False, literal_binds=True
  1901. )
  1902. # explicitly check for True|False since None means server default
  1903. if generated.persisted is True:
  1904. text += " PERSISTED"
  1905. return text
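# e.g. Column("y", Integer, Computed("x + 1", persisted=True)) renders
# its specification as "AS (x + 1) PERSISTED" (names illustrative).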
  1906. def visit_create_sequence(self, create, **kw):
  1907. prefix = None
  1908. if create.element.data_type is not None:
  1909. data_type = create.element.data_type
  1910. prefix = " AS %s" % self.type_compiler.process(data_type)
  1911. return super(MSDDLCompiler, self).visit_create_sequence(
  1912. create, prefix=prefix, **kw
  1913. )
  1914. def visit_identity_column(self, identity, **kw):
  1915. text = " IDENTITY"
  1916. if identity.start is not None or identity.increment is not None:
  1917. start = 1 if identity.start is None else identity.start
  1918. increment = 1 if identity.increment is None else identity.increment
  1919. text += "(%s,%s)" % (start, increment)
  1920. return text
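# e.g. Identity(start=10, increment=5) renders as " IDENTITY(10,5)";
# a bare Identity() renders just " IDENTITY".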
  1921. class MSIdentifierPreparer(compiler.IdentifierPreparer):
  1922. reserved_words = RESERVED_WORDS
  1923. def __init__(self, dialect):
  1924. super(MSIdentifierPreparer, self).__init__(
  1925. dialect,
  1926. initial_quote="[",
  1927. final_quote="]",
  1928. quote_case_sensitive_collations=False,
  1929. )
  1930. def _escape_identifier(self, value):
  1931. return value.replace("]", "]]")
  1932. def _unescape_identifier(self, value):
  1933. return value.replace("]]", "]")
  1934. def quote_schema(self, schema, force=None):
  1935. """Prepare a quoted table and schema name."""
  1936. # need to re-implement the deprecation warning entirely
  1937. if force is not None:
  1938. # not using the util.deprecated_params() decorator in this
  1939. # case because of the additional function call overhead on this
  1940. # very performance-critical spot.
  1941. util.warn_deprecated(
  1942. "The IdentifierPreparer.quote_schema.force parameter is "
  1943. "deprecated and will be removed in a future release. This "
  1944. "flag has no effect on the behavior of the "
  1945. "IdentifierPreparer.quote method; please refer to "
  1946. "quoted_name().",
  1947. version="1.3",
  1948. )
  1949. dbname, owner = _schema_elements(schema)
  1950. if dbname:
  1951. result = "%s.%s" % (self.quote(dbname), self.quote(owner))
  1952. elif owner:
  1953. result = self.quote(owner)
  1954. else:
  1955. result = ""
  1956. return result
  1957. def _db_plus_owner_listing(fn):
  1958. def wrap(dialect, connection, schema=None, **kw):
  1959. dbname, owner = _owner_plus_db(dialect, schema)
  1960. return _switch_db(
  1961. dbname,
  1962. connection,
  1963. fn,
  1964. dialect,
  1965. connection,
  1966. dbname,
  1967. owner,
  1968. schema,
  1969. **kw
  1970. )
  1971. return update_wrapper(wrap, fn)
  1972. def _db_plus_owner(fn):
  1973. def wrap(dialect, connection, tablename, schema=None, **kw):
  1974. dbname, owner = _owner_plus_db(dialect, schema)
  1975. return _switch_db(
  1976. dbname,
  1977. connection,
  1978. fn,
  1979. dialect,
  1980. connection,
  1981. tablename,
  1982. dbname,
  1983. owner,
  1984. schema,
  1985. **kw
  1986. )
  1987. return update_wrapper(wrap, fn)
  1988. def _switch_db(dbname, connection, fn, *arg, **kw):
  1989. if dbname:
  1990. current_db = connection.exec_driver_sql("select db_name()").scalar()
  1991. if current_db != dbname:
  1992. connection.exec_driver_sql(
  1993. "use %s" % connection.dialect.identifier_preparer.quote(dbname)
  1994. )
  1995. try:
  1996. return fn(*arg, **kw)
  1997. finally:
  1998. if dbname and current_db != dbname:
  1999. connection.exec_driver_sql(
  2000. "use %s"
  2001. % connection.dialect.identifier_preparer.quote(current_db)
  2002. )
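# Sketch of the flow: when a database name was parsed from the schema,
# the connection is temporarily pointed at it via "USE <dbname>", the
# wrapped reflection function runs, and the original database is
# restored in the finally block.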
  2003. def _owner_plus_db(dialect, schema):
  2004. if not schema:
  2005. return None, dialect.default_schema_name
  2006. elif "." in schema:
  2007. return _schema_elements(schema)
  2008. else:
  2009. return None, schema
  2010. _memoized_schema = util.LRUCache()
  2011. def _schema_elements(schema):
  2012. if isinstance(schema, quoted_name) and schema.quote:
  2013. return None, schema
  2014. if schema in _memoized_schema:
  2015. return _memoized_schema[schema]
  2016. # tests for this function are in:
  2017. # test/dialect/mssql/test_reflection.py ->
  2018. # OwnerPlusDBTest.test_owner_database_pairs
  2019. # test/dialect/mssql/test_compiler.py -> test_force_schema_*
  2020. # test/dialect/mssql/test_compiler.py -> test_schema_many_tokens_*
  2021. #
  2022. push = []
  2023. symbol = ""
  2024. bracket = False
  2025. has_brackets = False
  2026. for token in re.split(r"(\[|\]|\.)", schema):
  2027. if not token:
  2028. continue
  2029. if token == "[":
  2030. bracket = True
  2031. has_brackets = True
  2032. elif token == "]":
  2033. bracket = False
  2034. elif not bracket and token == ".":
  2035. if has_brackets:
  2036. push.append("[%s]" % symbol)
  2037. else:
  2038. push.append(symbol)
  2039. symbol = ""
  2040. has_brackets = False
  2041. else:
  2042. symbol += token
  2043. if symbol:
  2044. push.append(symbol)
  2045. if len(push) > 1:
  2046. dbname, owner = ".".join(push[0:-1]), push[-1]
  2047. # test for internal brackets
  2048. if re.match(r".*\].*\[.*", dbname[1:-1]):
  2049. dbname = quoted_name(dbname, quote=False)
  2050. else:
  2051. dbname = dbname.lstrip("[").rstrip("]")
  2052. elif len(push):
  2053. dbname, owner = None, push[0]
  2054. else:
  2055. dbname, owner = None, None
  2056. _memoized_schema[schema] = dbname, owner
  2057. return dbname, owner
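# Illustrative parses (inputs assumed):
#   "mydb.dbo"      -> ("mydb", "dbo")
#   "[mydb].[dbo]"  -> ("mydb", "dbo")
#   "dbo"           -> (None, "dbo")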
  2058. class MSDialect(default.DefaultDialect):
  2059. # will assume it's at least mssql2005
  2060. name = "mssql"
  2061. supports_statement_cache = True
  2062. supports_default_values = True
  2063. supports_empty_insert = False
  2064. execution_ctx_cls = MSExecutionContext
  2065. use_scope_identity = True
  2066. max_identifier_length = 128
  2067. schema_name = "dbo"
  2068. implicit_returning = True
  2069. full_returning = True
  2070. colspecs = {
  2071. sqltypes.DateTime: _MSDateTime,
  2072. sqltypes.Date: _MSDate,
  2073. sqltypes.JSON: JSON,
  2074. sqltypes.JSON.JSONIndexType: JSONIndexType,
  2075. sqltypes.JSON.JSONPathType: JSONPathType,
  2076. sqltypes.Time: TIME,
  2077. sqltypes.Unicode: _MSUnicode,
  2078. sqltypes.UnicodeText: _MSUnicodeText,
  2079. }
  2080. engine_config_types = default.DefaultDialect.engine_config_types.union(
  2081. {"legacy_schema_aliasing": util.asbool}
  2082. )
  2083. ischema_names = ischema_names
  2084. supports_sequences = True
  2085. sequences_optional = True
  2086. # T-SQL's actual default is -9223372036854775808
  2087. default_sequence_base = 1
  2088. supports_native_boolean = False
  2089. non_native_boolean_check_constraint = False
  2090. supports_unicode_binds = True
  2091. postfetch_lastrowid = True
  2092. _supports_offset_fetch = False
  2093. _supports_nvarchar_max = False
  2094. legacy_schema_aliasing = False
  2095. server_version_info = ()
  2096. statement_compiler = MSSQLCompiler
  2097. ddl_compiler = MSDDLCompiler
  2098. type_compiler = MSTypeCompiler
  2099. preparer = MSIdentifierPreparer
  2100. construct_arguments = [
  2101. (sa_schema.PrimaryKeyConstraint, {"clustered": None}),
  2102. (sa_schema.UniqueConstraint, {"clustered": None}),
  2103. (sa_schema.Index, {"clustered": None, "include": None, "where": None}),
  2104. (
  2105. sa_schema.Column,
  2106. {"identity_start": None, "identity_increment": None},
  2107. ),
  2108. ]
  2109. def __init__(
  2110. self,
  2111. query_timeout=None,
  2112. use_scope_identity=True,
  2113. schema_name="dbo",
  2114. isolation_level=None,
  2115. deprecate_large_types=None,
  2116. json_serializer=None,
  2117. json_deserializer=None,
  2118. legacy_schema_aliasing=None,
  2119. **opts
  2120. ):
  2121. self.query_timeout = int(query_timeout or 0)
  2122. self.schema_name = schema_name
  2123. self.use_scope_identity = use_scope_identity
  2124. self.deprecate_large_types = deprecate_large_types
  2125. if legacy_schema_aliasing is not None:
  2126. util.warn_deprecated(
  2127. "The legacy_schema_aliasing parameter is "
  2128. "deprecated and will be removed in a future release.",
  2129. "1.4",
  2130. )
  2131. self.legacy_schema_aliasing = legacy_schema_aliasing
  2132. super(MSDialect, self).__init__(**opts)
  2133. self.isolation_level = isolation_level
  2134. self._json_serializer = json_serializer
  2135. self._json_deserializer = json_deserializer
  2136. def do_savepoint(self, connection, name):
  2137. # give the DBAPI a push
  2138. connection.exec_driver_sql("IF @@TRANCOUNT = 0 BEGIN TRANSACTION")
  2139. super(MSDialect, self).do_savepoint(connection, name)
  2140. def do_release_savepoint(self, connection, name):
  2141. # SQL Server does not support RELEASE SAVEPOINT
  2142. pass
  2143. _isolation_lookup = set(
  2144. [
  2145. "SERIALIZABLE",
  2146. "READ UNCOMMITTED",
  2147. "READ COMMITTED",
  2148. "REPEATABLE READ",
  2149. "SNAPSHOT",
  2150. ]
  2151. )
  2152. def set_isolation_level(self, connection, level):
  2153. level = level.replace("_", " ")
  2154. if level not in self._isolation_lookup:
  2155. raise exc.ArgumentError(
  2156. "Invalid value '%s' for isolation_level. "
  2157. "Valid isolation levels for %s are %s"
  2158. % (level, self.name, ", ".join(self._isolation_lookup))
  2159. )
  2160. cursor = connection.cursor()
  2161. cursor.execute("SET TRANSACTION ISOLATION LEVEL %s" % level)
  2162. cursor.close()
  2163. if level == "SNAPSHOT":
  2164. connection.commit()
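# Illustrative usage (connection URL assumed):
#   engine = create_engine(
#       "mssql+pyodbc://scott:tiger@mydsn", isolation_level="SNAPSHOT"
#   )
# The extra commit() after SNAPSHOT appears intended to keep the
# isolation change from being left pending in an open transaction.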
  2165. def get_isolation_level(self, connection):
  2166. last_error = None
  2167. views = ("sys.dm_exec_sessions", "sys.dm_pdw_nodes_exec_sessions")
  2168. for view in views:
  2169. cursor = connection.cursor()
  2170. try:
  2171. cursor.execute(
  2172. """
  2173. SELECT CASE transaction_isolation_level
  2174. WHEN 0 THEN NULL
  2175. WHEN 1 THEN 'READ UNCOMMITTED'
  2176. WHEN 2 THEN 'READ COMMITTED'
  2177. WHEN 3 THEN 'REPEATABLE READ'
  2178. WHEN 4 THEN 'SERIALIZABLE'
  2179. WHEN 5 THEN 'SNAPSHOT' END AS TRANSACTION_ISOLATION_LEVEL
  2180. FROM %s
  2181. where session_id = @@SPID
  2182. """
  2183. % view
  2184. )
  2185. val = cursor.fetchone()[0]
  2186. except self.dbapi.Error as err:
  2187. # Python3 scoping rules
  2188. last_error = err
  2189. continue
  2190. else:
  2191. return val.upper()
  2192. finally:
  2193. cursor.close()
  2194. else:
  2195. # note that the NotImplementedError is caught by
  2196. # DefaultDialect, so the warning here is all that displays
  2197. util.warn(
  2198. "Could not fetch transaction isolation level, "
  2199. "tried views: %s; final error was: %s" % (views, last_error)
  2200. )
  2201. raise NotImplementedError(
  2202. "Can't fetch isolation level on this particular "
  2203. "SQL Server version. tried views: %s; final error was: %s"
  2204. % (views, last_error)
  2205. )
  2206. def initialize(self, connection):
  2207. super(MSDialect, self).initialize(connection)
  2208. self._setup_version_attributes()
  2209. self._setup_supports_nvarchar_max(connection)
  2210. def on_connect(self):
  2211. if self.isolation_level is not None:
  2212. def connect(conn):
  2213. self.set_isolation_level(conn, self.isolation_level)
  2214. return connect
  2215. else:
  2216. return None
  2217. def _setup_version_attributes(self):
  2218. if self.server_version_info[0] not in list(range(8, 17)):
  2219. util.warn(
  2220. "Unrecognized server version info '%s'. Some SQL Server "
  2221. "features may not function properly."
  2222. % ".".join(str(x) for x in self.server_version_info)
  2223. )
  2224. if self.server_version_info >= MS_2008_VERSION:
  2225. self.supports_multivalues_insert = True
  2226. if self.deprecate_large_types is None:
  2227. self.deprecate_large_types = (
  2228. self.server_version_info >= MS_2012_VERSION
  2229. )
  2230. self._supports_offset_fetch = (
  2231. self.server_version_info and self.server_version_info[0] >= 11
  2232. )
  2233. def _setup_supports_nvarchar_max(self, connection):
  2234. try:
  2235. connection.scalar(
  2236. sql.text("SELECT CAST('test max support' AS NVARCHAR(max))")
  2237. )
  2238. except exc.DBAPIError:
  2239. self._supports_nvarchar_max = False
  2240. else:
  2241. self._supports_nvarchar_max = True
  2242. def _get_default_schema_name(self, connection):
  2243. query = sql.text("SELECT schema_name()")
  2244. default_schema_name = connection.scalar(query)
  2245. if default_schema_name is not None:
  2246. # guard against the case where the default_schema_name is being
  2247. # fed back into a table reflection function.
  2248. return quoted_name(default_schema_name, quote=True)
  2249. else:
  2250. return self.schema_name
  2251. @_db_plus_owner
  2252. def has_table(self, connection, tablename, dbname, owner, schema):
  2253. self._ensure_has_table_connection(connection)
  2254. if tablename.startswith("#"): # temporary table
  2255. tables = ischema.mssql_temp_table_columns
  2256. s = sql.select(tables.c.table_name).where(
  2257. tables.c.table_name.like(
  2258. self._temp_table_name_like_pattern(tablename)
  2259. )
  2260. )
  2261. result = connection.execute(s.limit(1))
  2262. return result.scalar() is not None
  2263. else:
  2264. tables = ischema.tables
  2265. s = sql.select(tables.c.table_name).where(
  2266. sql.and_(
  2267. tables.c.table_type == "BASE TABLE",
  2268. tables.c.table_name == tablename,
  2269. )
  2270. )
  2271. if owner:
  2272. s = s.where(tables.c.table_schema == owner)
  2273. c = connection.execute(s)
  2274. return c.first() is not None
  2275. @_db_plus_owner
  2276. def has_sequence(self, connection, sequencename, dbname, owner, schema):
  2277. sequences = ischema.sequences
  2278. s = sql.select(sequences.c.sequence_name).where(
  2279. sequences.c.sequence_name == sequencename
  2280. )
  2281. if owner:
  2282. s = s.where(sequences.c.sequence_schema == owner)
  2283. c = connection.execute(s)
  2284. return c.first() is not None
  2285. @reflection.cache
  2286. @_db_plus_owner_listing
  2287. def get_sequence_names(self, connection, dbname, owner, schema, **kw):
  2288. sequences = ischema.sequences
  2289. s = sql.select(sequences.c.sequence_name)
  2290. if owner:
  2291. s = s.where(sequences.c.sequence_schema == owner)
  2292. c = connection.execute(s)
  2293. return [row[0] for row in c]
  2294. @reflection.cache
  2295. def get_schema_names(self, connection, **kw):
  2296. s = sql.select(ischema.schemata.c.schema_name).order_by(
  2297. ischema.schemata.c.schema_name
  2298. )
  2299. schema_names = [r[0] for r in connection.execute(s)]
  2300. return schema_names
  2301. @reflection.cache
  2302. @_db_plus_owner_listing
  2303. def get_table_names(self, connection, dbname, owner, schema, **kw):
  2304. tables = ischema.tables
  2305. s = (
  2306. sql.select(tables.c.table_name)
  2307. .where(
  2308. sql.and_(
  2309. tables.c.table_schema == owner,
  2310. tables.c.table_type == "BASE TABLE",
  2311. )
  2312. )
  2313. .order_by(tables.c.table_name)
  2314. )
  2315. table_names = [r[0] for r in connection.execute(s)]
  2316. return table_names
  2317. @reflection.cache
  2318. @_db_plus_owner_listing
  2319. def get_view_names(self, connection, dbname, owner, schema, **kw):
  2320. tables = ischema.tables
  2321. s = (
  2322. sql.select(tables.c.table_name)
  2323. .where(
  2324. sql.and_(
  2325. tables.c.table_schema == owner,
  2326. tables.c.table_type == "VIEW",
  2327. )
  2328. )
  2329. .order_by(tables.c.table_name)
  2330. )
  2331. view_names = [r[0] for r in connection.execute(s)]
  2332. return view_names
  2333. @reflection.cache
  2334. @_db_plus_owner
  2335. def get_indexes(self, connection, tablename, dbname, owner, schema, **kw):
  2336. filter_definition = (
  2337. "ind.filter_definition"
  2338. if self.server_version_info >= MS_2008_VERSION
  2339. else "NULL as filter_definition"
  2340. )
  2341. rp = connection.execution_options(future_result=True).execute(
  2342. sql.text(
  2343. "select ind.index_id, ind.is_unique, ind.name, "
  2344. "%s "
  2345. "from sys.indexes as ind join sys.tables as tab on "
  2346. "ind.object_id=tab.object_id "
  2347. "join sys.schemas as sch on sch.schema_id=tab.schema_id "
  2348. "where tab.name = :tabname "
  2349. "and sch.name=:schname "
  2350. "and ind.is_primary_key=0 and ind.type != 0"
  2351. % filter_definition
  2352. )
  2353. .bindparams(
  2354. sql.bindparam("tabname", tablename, ischema.CoerceUnicode()),
  2355. sql.bindparam("schname", owner, ischema.CoerceUnicode()),
  2356. )
  2357. .columns(name=sqltypes.Unicode())
  2358. )
  2359. indexes = {}
  2360. for row in rp.mappings():
  2361. indexes[row["index_id"]] = {
  2362. "name": row["name"],
  2363. "unique": row["is_unique"] == 1,
  2364. "column_names": [],
  2365. "include_columns": [],
  2366. }
  2367. if row["filter_definition"] is not None:
  2368. indexes[row["index_id"]].setdefault("dialect_options", {})[
  2369. "mssql_where"
  2370. ] = row["filter_definition"]
  2371. rp = connection.execution_options(future_result=True).execute(
  2372. sql.text(
  2373. "select ind_col.index_id, ind_col.object_id, col.name, "
  2374. "ind_col.is_included_column "
  2375. "from sys.columns as col "
  2376. "join sys.tables as tab on tab.object_id=col.object_id "
  2377. "join sys.index_columns as ind_col on "
  2378. "(ind_col.column_id=col.column_id and "
  2379. "ind_col.object_id=tab.object_id) "
  2380. "join sys.schemas as sch on sch.schema_id=tab.schema_id "
  2381. "where tab.name=:tabname "
  2382. "and sch.name=:schname"
  2383. )
  2384. .bindparams(
  2385. sql.bindparam("tabname", tablename, ischema.CoerceUnicode()),
  2386. sql.bindparam("schname", owner, ischema.CoerceUnicode()),
  2387. )
  2388. .columns(name=sqltypes.Unicode())
  2389. )
  2390. for row in rp.mappings():
  2391. if row["index_id"] in indexes:
  2392. if row["is_included_column"]:
  2393. indexes[row["index_id"]]["include_columns"].append(
  2394. row["name"]
  2395. )
  2396. else:
  2397. indexes[row["index_id"]]["column_names"].append(
  2398. row["name"]
  2399. )
  2400. return list(indexes.values())
  2401. @reflection.cache
  2402. @_db_plus_owner
  2403. def get_view_definition(
  2404. self, connection, viewname, dbname, owner, schema, **kw
  2405. ):
  2406. rp = connection.execute(
  2407. sql.text(
  2408. "select definition from sys.sql_modules as mod, "
  2409. "sys.views as views, "
  2410. "sys.schemas as sch"
  2411. " where "
  2412. "mod.object_id=views.object_id and "
  2413. "views.schema_id=sch.schema_id and "
  2414. "views.name=:viewname and sch.name=:schname"
  2415. ).bindparams(
  2416. sql.bindparam("viewname", viewname, ischema.CoerceUnicode()),
  2417. sql.bindparam("schname", owner, ischema.CoerceUnicode()),
  2418. )
  2419. )
  2420. if rp:
  2421. view_def = rp.scalar()
  2422. return view_def
  2423. def _temp_table_name_like_pattern(self, tablename):
  2424. return tablename + (("___%") if not tablename.startswith("##") else "")
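# e.g. "#temp" -> LIKE pattern "#temp___%" (matching the server's
# appended padding/suffix); global temp tables ("##...") are used as-is.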
  2425. def _get_internal_temp_table_name(self, connection, tablename):
  2426. # it's likely that schema is always "dbo", but since we can
  2427. # get it here, let's get it.
  2428. # see https://stackoverflow.com/questions/8311959/
  2429. # specifying-schema-for-temporary-tables
  2430. try:
  2431. return connection.execute(
  2432. sql.text(
  2433. "select table_schema, table_name "
  2434. "from tempdb.information_schema.tables "
  2435. "where table_name like :p1"
  2436. ),
  2437. {"p1": self._temp_table_name_like_pattern(tablename)},
  2438. ).one()
  2439. except exc.MultipleResultsFound as me:
  2440. util.raise_(
  2441. exc.UnreflectableTableError(
  2442. "Found more than one temporary table named '%s' in tempdb "
  2443. "at this time. Cannot reliably resolve that name to its "
  2444. "internal table name." % tablename
  2445. ),
  2446. replace_context=me,
  2447. )
  2448. except exc.NoResultFound as ne:
  2449. util.raise_(
  2450. exc.NoSuchTableError(
  2451. "Unable to find a temporary table named '%s' in tempdb."
  2452. % tablename
  2453. ),
  2454. replace_context=ne,
  2455. )
  2456. @reflection.cache
  2457. @_db_plus_owner
  2458. def get_columns(self, connection, tablename, dbname, owner, schema, **kw):
  2459. is_temp_table = tablename.startswith("#")
  2460. if is_temp_table:
  2461. owner, tablename = self._get_internal_temp_table_name(
  2462. connection, tablename
  2463. )
  2464. columns = ischema.mssql_temp_table_columns
  2465. else:
  2466. columns = ischema.columns
  2467. computed_cols = ischema.computed_columns
  2468. identity_cols = ischema.identity_columns
  2469. if owner:
  2470. whereclause = sql.and_(
  2471. columns.c.table_name == tablename,
  2472. columns.c.table_schema == owner,
  2473. )
  2474. full_name = columns.c.table_schema + "." + columns.c.table_name
  2475. else:
  2476. whereclause = columns.c.table_name == tablename
  2477. full_name = columns.c.table_name
  2478. join = columns.join(
  2479. computed_cols,
  2480. onclause=sql.and_(
  2481. computed_cols.c.object_id == func.object_id(full_name),
  2482. computed_cols.c.name == columns.c.column_name,
  2483. ),
  2484. isouter=True,
  2485. ).join(
  2486. identity_cols,
  2487. onclause=sql.and_(
  2488. identity_cols.c.object_id == func.object_id(full_name),
  2489. identity_cols.c.name == columns.c.column_name,
  2490. ),
  2491. isouter=True,
  2492. )
  2493. if self._supports_nvarchar_max:
  2494. computed_definition = computed_cols.c.definition
  2495. else:
  2496. # tds_version 4.2 does not support NVARCHAR(MAX)
  2497. computed_definition = sql.cast(
  2498. computed_cols.c.definition, NVARCHAR(4000)
  2499. )
  2500. s = (
  2501. sql.select(
  2502. columns,
  2503. computed_definition,
  2504. computed_cols.c.is_persisted,
  2505. identity_cols.c.is_identity,
  2506. identity_cols.c.seed_value,
  2507. identity_cols.c.increment_value,
  2508. )
  2509. .where(whereclause)
  2510. .select_from(join)
  2511. .order_by(columns.c.ordinal_position)
  2512. )
  2513. c = connection.execution_options(future_result=True).execute(s)
  2514. cols = []
  2515. for row in c.mappings():
  2516. name = row[columns.c.column_name]
  2517. type_ = row[columns.c.data_type]
  2518. nullable = row[columns.c.is_nullable] == "YES"
  2519. charlen = row[columns.c.character_maximum_length]
  2520. numericprec = row[columns.c.numeric_precision]
  2521. numericscale = row[columns.c.numeric_scale]
  2522. default = row[columns.c.column_default]
  2523. collation = row[columns.c.collation_name]
  2524. definition = row[computed_definition]
  2525. is_persisted = row[computed_cols.c.is_persisted]
  2526. is_identity = row[identity_cols.c.is_identity]
  2527. identity_start = row[identity_cols.c.seed_value]
  2528. identity_increment = row[identity_cols.c.increment_value]
  2529. coltype = self.ischema_names.get(type_, None)
  2530. kwargs = {}
  2531. if coltype in (
  2532. MSString,
  2533. MSChar,
  2534. MSNVarchar,
  2535. MSNChar,
  2536. MSText,
  2537. MSNText,
  2538. MSBinary,
  2539. MSVarBinary,
  2540. sqltypes.LargeBinary,
  2541. ):
  2542. if charlen == -1:
  2543. charlen = None
  2544. kwargs["length"] = charlen
  2545. if collation:
  2546. kwargs["collation"] = collation
  2547. if coltype is None:
  2548. util.warn(
  2549. "Did not recognize type '%s' of column '%s'"
  2550. % (type_, name)
  2551. )
  2552. coltype = sqltypes.NULLTYPE
  2553. else:
  2554. if issubclass(coltype, sqltypes.Numeric):
  2555. kwargs["precision"] = numericprec
  2556. if not issubclass(coltype, sqltypes.Float):
  2557. kwargs["scale"] = numericscale
  2558. coltype = coltype(**kwargs)
  2559. cdict = {
  2560. "name": name,
  2561. "type": coltype,
  2562. "nullable": nullable,
  2563. "default": default,
  2564. "autoincrement": is_identity is not None,
  2565. }
  2566. if definition is not None and is_persisted is not None:
  2567. cdict["computed"] = {
  2568. "sqltext": definition,
  2569. "persisted": is_persisted,
  2570. }
  2571. if is_identity is not None:
  2572. # identity_start and identity_increment are Decimal or None
  2573. if identity_start is None or identity_increment is None:
  2574. cdict["identity"] = {}
  2575. else:
  2576. if isinstance(coltype, sqltypes.BigInteger):
  2577. start = compat.long_type(identity_start)
  2578. increment = compat.long_type(identity_increment)
  2579. elif isinstance(coltype, sqltypes.Integer):
  2580. start = int(identity_start)
  2581. increment = int(identity_increment)
  2582. else:
  2583. start = identity_start
  2584. increment = identity_increment
  2585. cdict["identity"] = {
  2586. "start": start,
  2587. "increment": increment,
  2588. }
  2589. cols.append(cdict)
  2590. return cols
  2591. @reflection.cache
  2592. @_db_plus_owner
  2593. def get_pk_constraint(
  2594. self, connection, tablename, dbname, owner, schema, **kw
  2595. ):
  2596. pkeys = []
  2597. TC = ischema.constraints
  2598. C = ischema.key_constraints.alias("C")
  2599. # Primary key constraints
  2600. s = (
  2601. sql.select(
  2602. C.c.column_name, TC.c.constraint_type, C.c.constraint_name
  2603. )
  2604. .where(
  2605. sql.and_(
  2606. TC.c.constraint_name == C.c.constraint_name,
  2607. TC.c.table_schema == C.c.table_schema,
  2608. C.c.table_name == tablename,
  2609. C.c.table_schema == owner,
  2610. ),
  2611. )
  2612. .order_by(TC.c.constraint_name, C.c.ordinal_position)
  2613. )
  2614. c = connection.execution_options(future_result=True).execute(s)
  2615. constraint_name = None
  2616. for row in c.mappings():
  2617. if "PRIMARY" in row[TC.c.constraint_type.name]:
  2618. pkeys.append(row["COLUMN_NAME"])
  2619. if constraint_name is None:
  2620. constraint_name = row[C.c.constraint_name.name]
  2621. return {"constrained_columns": pkeys, "name": constraint_name}
  2622. @reflection.cache
  2623. @_db_plus_owner
  2624. def get_foreign_keys(
  2625. self, connection, tablename, dbname, owner, schema, **kw
  2626. ):
  2627. RR = ischema.ref_constraints
  2628. C = ischema.key_constraints.alias("C")
  2629. R = ischema.key_constraints.alias("R")
  2630. # Foreign key constraints
  2631. s = (
  2632. sql.select(
  2633. C.c.column_name,
  2634. R.c.table_schema,
  2635. R.c.table_name,
  2636. R.c.column_name,
  2637. RR.c.constraint_name,
  2638. RR.c.match_option,
  2639. RR.c.update_rule,
  2640. RR.c.delete_rule,
  2641. )
  2642. .where(
  2643. sql.and_(
  2644. C.c.table_name == tablename,
  2645. C.c.table_schema == owner,
  2646. RR.c.constraint_schema == C.c.table_schema,
  2647. C.c.constraint_name == RR.c.constraint_name,
  2648. R.c.constraint_name == RR.c.unique_constraint_name,
  2649. R.c.constraint_schema == RR.c.unique_constraint_schema,
  2650. C.c.ordinal_position == R.c.ordinal_position,
  2651. )
  2652. )
  2653. .order_by(RR.c.constraint_name, R.c.ordinal_position)
  2654. )
  2655. # group rows by constraint ID, to handle multi-column FKs
  2656. fkeys = []
  2657. def fkey_rec():
  2658. return {
  2659. "name": None,
  2660. "constrained_columns": [],
  2661. "referred_schema": None,
  2662. "referred_table": None,
  2663. "referred_columns": [],
  2664. }
  2665. fkeys = util.defaultdict(fkey_rec)
  2666. for r in connection.execute(s).fetchall():
  2667. scol, rschema, rtbl, rcol, rfknm, fkmatch, fkuprule, fkdelrule = r
  2668. rec = fkeys[rfknm]
  2669. rec["name"] = rfknm
  2670. if not rec["referred_table"]:
  2671. rec["referred_table"] = rtbl
  2672. if schema is not None or owner != rschema:
  2673. if dbname:
  2674. rschema = dbname + "." + rschema
  2675. rec["referred_schema"] = rschema
  2676. local_cols, remote_cols = (
  2677. rec["constrained_columns"],
  2678. rec["referred_columns"],
  2679. )
  2680. local_cols.append(scol)
  2681. remote_cols.append(rcol)
  2682. return list(fkeys.values())