5.4 TABLE

As a relational DBMS, Firebird stores data in tables. A table is a flat, two-dimensional structure containing any number of rows. Table rows are often called records.

All rows in a table have the same structure and consist of columns. Table columns are often called fields. A table must have at least one column. Each column contains a single type of SQL data.

This section describes how to create, alter and delete tables in a database.

5.4.1 CREATE TABLE

Used for: Creating a new table (relation)

Available in: DSQL, ESQL

Syntax

  CREATE [GLOBAL TEMPORARY] TABLE tablename
    [EXTERNAL [FILE] 'filespec']
    (<col_def> [, {<col_def> | <tconstraint>} ...])
    [{<table_attrs> | <gtt_table_attrs>}]

  <col_def> ::=
      <regular_col_def>
    | <computed_col_def>
    | <identity_col_def>

  <regular_col_def> ::=
    colname {<datatype> | domainname}
    [DEFAULT {<literal> | NULL | <context_var>}]
    [<col_constraint> ...]
    [COLLATE collation_name]

  <computed_col_def> ::=
    colname [{<datatype> | domainname}]
    {COMPUTED [BY] | GENERATED ALWAYS AS} (<expression>)

  <identity_col_def> ::=
    colname {<datatype> | domainname}
    GENERATED {ALWAYS | BY DEFAULT} AS IDENTITY
    [(<identity_col_option>...)]
    [<col_constraint> ...]

  <identity_col_option> ::=
      START WITH start_value
    | INCREMENT [BY] inc_value

  <datatype> ::=
    <scalar_datatype> | <blob_datatype> | <array_datatype>

  <scalar_datatype> ::=
    !! See Scalar Data Types Syntax !!

  <blob_datatype> ::=
    !! See BLOB Data Types Syntax !!

  <array_datatype> ::=
    !! See Array Data Types Syntax !!

  <col_constraint> ::=
    [CONSTRAINT constr_name]
      { PRIMARY KEY [<using_index>]
      | UNIQUE [<using_index>]
      | REFERENCES other_table [(colname)] [<using_index>]
          [ON DELETE {NO ACTION | CASCADE | SET DEFAULT | SET NULL}]
          [ON UPDATE {NO ACTION | CASCADE | SET DEFAULT | SET NULL}]
      | CHECK (<check_condition>)
      | NOT NULL }

  <tconstraint> ::=
    [CONSTRAINT constr_name]
      { PRIMARY KEY (<col_list>) [<using_index>]
      | UNIQUE (<col_list>) [<using_index>]
      | FOREIGN KEY (<col_list>)
          REFERENCES other_table [(<col_list>)] [<using_index>]
          [ON DELETE {NO ACTION | CASCADE | SET DEFAULT | SET NULL}]
          [ON UPDATE {NO ACTION | CASCADE | SET DEFAULT | SET NULL}]
      | CHECK (<check_condition>) }

  <col_list> ::= colname [, colname ...]

  <using_index> ::= USING
    [ASC[ENDING] | DESC[ENDING]] INDEX indexname

  <check_condition> ::=
      <val> <operator> <val>
    | <val> [NOT] BETWEEN <val> AND <val>
    | <val> [NOT] IN (<val> [, <val> ...] | <select_list>)
    | <val> IS [NOT] NULL
    | <val> IS [NOT] DISTINCT FROM <val>
    | <val> [NOT] CONTAINING <val>
    | <val> [NOT] STARTING [WITH] <val>
    | <val> [NOT] LIKE <val> [ESCAPE <val>]
    | <val> [NOT] SIMILAR TO <val> [ESCAPE <val>]
    | <val> <operator> {ALL | SOME | ANY} (<select_list>)
    | [NOT] EXISTS (<select_expr>)
    | [NOT] SINGULAR (<select_expr>)
    | (<check_condition>)
    | NOT <check_condition>
    | <check_condition> OR <check_condition>
    | <check_condition> AND <check_condition>

  <operator> ::=
      <> | != | ^= | ~= | = | < | > | <= | >=
    | !< | ^< | ~< | !> | ^> | ~>

  <val> ::=
      colname ['['array_idx [, array_idx ...]']']
    | <literal>
    | <context_var>
    | <expression>
    | NULL
    | NEXT VALUE FOR genname
    | GEN_ID(genname, <val>)
    | CAST(<val> AS <cast_type>)
    | (<select_one>)
    | func([<val> [, <val> ...]])

  <cast_type> ::= <domain_or_non_array_type> | <array_datatype>

  <domain_or_non_array_type> ::=
    !! See Scalar Data Types Syntax !!

  <table_attrs> ::= <table_attr> [<table_attr> ...]

  <table_attr> ::=
      <sql_security>
    | {ENABLE | DISABLE} PUBLICATION

  <sql_security> ::= SQL SECURITY {INVOKER | DEFINER}

  <gtt_table_attrs> ::= <gtt_table_attr> [<gtt_table_attr> ...]

  <gtt_table_attr> ::=
      <sql_security>
    | ON COMMIT {DELETE | PRESERVE} ROWS

Table 5.4.1.1 CREATE TABLE Statement Parameters

  • tablename - Name (identifier) for the table. The maximum length is 63 characters. Must be unique in the database.

  • filespec - File specification (only for external tables). Full file name and path, enclosed in single quotes, correct for the local file system and located on a storage device that is physically connected to Firebird’s host computer.

  • colname - Name (identifier) for a column in the table. The maximum length is 63 characters. Must be unique in the table.

  • tconstraint - Table constraint

  • table_attrs - Attributes of a normal table

  • gtt_table_attrs - Attributes of a global temporary table

  • datatype - SQL data type

  • domain_name - Domain name

  • start_value - The initial value of the identity column

  • inc_value - The increment (or step) value of the identity column; the default is 1, zero (0) is not allowed

  • col_constraint - Column constraint

  • constr_name - The name (identifier) of a constraint. The maximum length is 63 characters.

  • other_table - The name of the table referenced by the foreign key constraint

  • other_col - The name of the column in other_table that is referenced by the foreign key

  • literal - A literal value that is allowed in the given context

  • context_var - Any context variable whose data type is allowed in the given context

  • check_condition - The condition applied to a CHECK constraint, which will resolve as either true, false or NULL

  • collation - Collation sequence name

  • select_one - A scalar SELECT statement, selecting one column and returning only one row

  • select_list - A SELECT statement selecting one column and returning zero or more rows

  • select_expr - A SELECT statement selecting one or more columns and returning zero or more rows

  • expression - An expression resolving to a value that is allowed in the given context

  • genname - Sequence (generator) name

  • func - Internal function or UDF

The CREATE TABLE statement creates a new table. Any user can create one, and its name must be unique among the names of all tables, views and stored procedures in the database.

A table must contain at least one column that is not computed, and the names of columns must be unique in the table.

A column must either have an explicit SQL data type, be based on a domain whose attributes will be copied for the column, or be defined as COMPUTED BY an expression (a calculated field).

A table may have any number of table constraints, including none.

5.4.1.1 Character Columns

You can use the CHARACTER SET clause to specify the character set for the CHAR, VARCHAR and BLOB (text subtype) types. If the character set is not specified, the default character set of the database at the time the column was created will be used. If the database has no default character set, the NONE character set is applied. In this case, data is stored and retrieved the way it was submitted. Data in any encoding can be added to such a column, but it is not possible to add this data to a column with a different encoding. No transliteration is performed between the source and destination encodings, which may result in errors.

The optional COLLATE clause allows you to specify the collation sequence for character data types, including BLOB SUB_TYPE TEXT. If no collation sequence is specified, the default collation sequence for the specified character set at the time the column was created is applied.

5.4.1.2 Setting a DEFAULT Value

The optional DEFAULT clause allows you to specify a default value for the table column. This value is used when an INSERT statement is executed and no value is supplied for the column, i.e. the column is omitted from the column list of the INSERT statement.

The default value can be a literal of a compatible type, a context variable that is type-compatible with the data type of the column, or NULL, if the column allows it. If no default value is explicitly specified, NULL is implied.

An expression cannot be used as a default value.
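
For illustration, here is a minimal sketch (the table and column names are hypothetical) combining a literal default, context-variable defaults and an explicit NULL default:

  CREATE TABLE ORDERS_DEMO (
    ORDER_ID   INTEGER NOT NULL PRIMARY KEY,
    STATUS     VARCHAR(10) DEFAULT 'NEW',            -- literal default
    CREATED_AT TIMESTAMP DEFAULT CURRENT_TIMESTAMP,  -- context variable
    CREATED_BY VARCHAR(63) DEFAULT CURRENT_USER,     -- context variable
    NOTE       VARCHAR(100) DEFAULT NULL             -- explicit NULL default
  );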

5.4.1.3 Domain-based Columns

To define a column, you can use a previously defined domain. If the definition of a column is based on a domain, it may contain a new default value, additional CHECK constraints, and a COLLATE clause that will override the values specified in the domain definition. The definition of such a column may also contain additional column constraints (for instance, NOT NULL), if the domain does not already include them.

Important

It is not possible to define a domain-based column that is nullable if the domain was defined with the NOT NULL attribute. If you want a domain that might be used for defining both nullable and non-nullable columns and variables, it is better practice to define the domain as nullable and to apply NOT NULL in the downstream column definitions and variable declarations.
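
As a minimal sketch (the domain and table are hypothetical), the column below is based on a nullable domain and adds its own default, a NOT NULL constraint and an extra CHECK on top of the domain definition:

  CREATE DOMAIN D_CODE AS VARCHAR(8)
    CHECK (VALUE = UPPER(VALUE));

  CREATE TABLE ITEMS_DEMO (
    -- DEFAULT, NOT NULL and CHECK are added at column level;
    -- the domain CHECK still applies as well
    CODE D_CODE DEFAULT 'UNKNOWN' NOT NULL CHECK (CHAR_LENGTH(CODE) >= 3),
    NAME VARCHAR(40)
  );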

5.4.1.4 Identity Columns (Autoincrement)

Identity columns are defined using the GENERATED {ALWAYS | BY DEFAULT} AS IDENTITY clause. The identity column is a column associated with an internal sequence generator. Its value is set automatically every time it is not specified in the INSERT statement, or when the column value is specified as DEFAULT.

Rules
  • The data type of an identity column must be an exact number type with zero scale. Allowed types are SMALLINT, INTEGER, BIGINT, NUMERIC(p[,0]) and DECIMAL(p[,0]) with p <= 18.

    • The INT128 type and numeric types with a precision higher than 18 are not supported.
  • An identity column cannot have a DEFAULT or COMPUTED value.

  • An identity column can be altered to become a regular column.

  • A regular column cannot be altered to become an identity column.

  • Identity columns are implicitly NOT NULL (non-nullable), and cannot be made nullable.

  • Uniqueness is not enforced automatically. A UNIQUE or PRIMARY KEY constraint is required to guarantee uniqueness.

  • The use of other methods of generating key values for identity columns, e.g. by trigger-generator code or by allowing users to change or add them, is discouraged to avoid unexpected key violations.

  • The INCREMENT value cannot be zero (0).

5.4.1.4.1 GENERATED ALWAYS

An identity column of type GENERATED ALWAYS will always generate a column value on insert. Explicitly inserting a value into a column of this type is not allowed, unless either:

  1. the specified value is DEFAULT; this generates the identity value as normal.

  2. the OVERRIDING SYSTEM VALUE clause is specified in the INSERT statement; this allows a user value to be inserted.
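
A short sketch (the table name is hypothetical) showing how inserts against a GENERATED ALWAYS identity column behave:

  CREATE TABLE EVENTS_DEMO (
    ID    BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    TITLE VARCHAR(40)
  );

  INSERT INTO EVENTS_DEMO (TITLE) VALUES ('first');                -- ID is generated
  INSERT INTO EVENTS_DEMO (ID, TITLE) VALUES (DEFAULT, 'second');  -- ID is also generated
  -- an explicit value is only accepted with OVERRIDING SYSTEM VALUE
  INSERT INTO EVENTS_DEMO (ID, TITLE)
    OVERRIDING SYSTEM VALUE VALUES (1000, 'explicit');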

5.4.1.4.2 GENERATED BY DEFAULT

An identity column of type GENERATED BY DEFAULT will generate a value on insert if no value — other than DEFAULT — is specified on insert. When the OVERRIDING USER VALUE clause is specified in the INSERT statement, the user-provided value is ignored, and an identity value is generated (as if the column was not included in the insert, or the value DEFAULT was specified).
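
For example, in this sketch (hypothetical table) the first insert keeps the user-supplied value, while OVERRIDING USER VALUE makes the engine ignore it and generate an identity value instead:

  CREATE TABLE GADGETS_DEMO (
    ID   INTEGER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    NAME VARCHAR(40)
  );

  INSERT INTO GADGETS_DEMO (ID, NAME) VALUES (100, 'user value kept');
  INSERT INTO GADGETS_DEMO (ID, NAME)
    OVERRIDING USER VALUE VALUES (200, 'identity generated');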

5.4.1.4.3 START WITH Option

The optional START WITH clause allows you to specify an initial value other than 1.

Note

Previous versions of Firebird instead used the specified value as the initial value of the internal generator backing the identity column, so the first value was 1 higher than the START WITH value.

This has been fixed in Firebird 4.0 and now the first value generated is the START WITH value, see also firebird#6615.

5.4.1.4.4 INCREMENT Option

The optional INCREMENT clause allows you to specify another non-zero step value than 1.

Warning

The SQL standard specifies that if INCREMENT is specified with a negative value and START WITH is not specified, the first value generated should be the maximum of the column type (e.g. 2^31 - 1 for INTEGER). Instead, Firebird will start at 1.
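
Both options can be combined in one definition, as in this sketch (hypothetical table; the generated values would be 1000, 1010, 1020, and so on):

  CREATE TABLE TICKETS_DEMO (
    ID      BIGINT GENERATED BY DEFAULT AS IDENTITY (START WITH 1000 INCREMENT BY 10)
            PRIMARY KEY,
    SUBJECT VARCHAR(60)
  );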

5.4.1.5 Calculated Fields

Calculated fields can be defined with the COMPUTED [BY] or GENERATED ALWAYS AS clause (the latter following the SQL:2003 standard); the two forms are equivalent. Describing the data type is not required (but possible) for calculated fields, as the DBMS calculates and stores the appropriate type as a result of the expression analysis. The operations used in the expression must be appropriate for the data types of the columns it involves.

If the data type is explicitly specified for a calculated field, the calculation result is converted to the specified type. This means, for instance, that the result of a numeric expression could be rendered as a string.

In a query that selects a COMPUTED BY column, the expression is evaluated for each row of the selected data.

Tip

Instead of a computed column, in some cases it makes sense to use a regular column whose value is evaluated in triggers for adding and updating data. It may reduce the performance of inserting/updating records, but it will increase the performance of data selection.

5.4.1.6 Defining an Array Column

  • If the column is to be an array, the base type can be any SQL data type except BLOB and array.

  • The dimensions of the array are specified between square brackets. (In the Syntax block these brackets appear in quotes to distinguish them from the square brackets that identify optional syntax elements.)

  • For each array dimension, one or two integer numbers define the lower and upper boundaries of its index range:

    • By default, arrays are 1-based. The lower boundary is implicit and only the upper boundary need be specified. A single number smaller than 1 defines the range num..1 and a number greater than 1 defines the range 1..num.

    • Two numbers separated by a colon (:) and optional whitespace, the second greater than the first, can be used to define the range explicitly. One or both boundaries can be less than zero, as long as the upper boundary is greater than the lower.

  • When the array has multiple dimensions, the range definitions for each dimension must be separated by commas and optional whitespace.

  • Subscripts are validated only if an array actually exists. It means that no error messages regarding invalid subscripts will be returned if selecting a specific element returns nothing or if an array field is NULL.
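
To illustrate the rules above (table and column names are hypothetical): the first array column uses the default 1-based bounds, the second defines both bounds explicitly, and the third has two dimensions:

  CREATE TABLE ARRAY_DEMO (
    ID           INTEGER NOT NULL PRIMARY KEY,
    LANGS        VARCHAR(15) [5],    -- bounds 1..5
    TEMPERATURES FLOAT [0:23],       -- bounds 0..23
    MATRIX       INTEGER [3, 1:4]    -- two dimensions: 1..3 and 1..4
  );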

5.4.1.7 Constraints

Five types of constraints can be specified. They are:

  • Primary key (PRIMARY KEY)

  • Unique key (UNIQUE)

  • Foreign key (REFERENCES)

  • CHECK constraint (CHECK)

  • NOT NULL constraint (NOT NULL)

Constraints can be specified at column level (column constraints) or at table level (table constraints). Table-level constraints are required when keys (unique constraint, Primary Key, Foreign Key) consist of multiple columns and when a CHECK constraint involves other columns in the row besides the column being defined. The NOT NULL constraint can only be specified as a column constraint. Syntax for some types of constraint may differ slightly according to whether the constraint is defined at the column or table level.

  • A column-level constraint is specified during a column definition, after all column attributes except COLLATION are specified, and can involve only the column specified in that definition

  • Table-level constraints can only be specified after the definitions of the columns used in the constraint.

  • Table-level constraints are a more flexible way to set constraints, since they can cater for constraints involving multiple columns

  • You can mix column-level and table-level constraints in the same CREATE TABLE statement

The system automatically creates the corresponding index for a primary key (PRIMARY KEY), a unique key (UNIQUE) and a foreign key (REFERENCES for a column-level constraint, FOREIGN KEY REFERENCES for one at the table level).

5.4.1.7.1 Names for Constraints and Their Indexes

Column-level constraints and their indexes are named automatically:

  • The constraint name has the form INTEG_n, where n represents one or more digits

  • The index name has the form RDB$PRIMARYn (for a primary key index), RDB$FOREIGNn (for a foreign key index) or RDB$n (for a unique key index). Again, n represents one or more digits.

Automatic naming of table-level constraints and their indexes follows the same pattern, unless the names are supplied explicitly.

5.4.1.7.1.1 Named Constraints

A constraint can be named explicitly if the CONSTRAINT clause is used for its definition. While the CONSTRAINT clause is optional for defining column-level constraints, it is mandatory for table-level constraints. By default, the constraint index will have the same name as the constraint. If a different name is wanted for the constraint index, a USING clause can be included.

5.4.1.7.1.2 The USING Clause

The USING clause allows you to specify a user-defined name for the index that is created automatically and, optionally, to define the direction of the index — either ascending (the default) or descending.
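
A brief sketch (names hypothetical) of a named column-level constraint whose automatically created index gets a custom name and a descending sort order:

  CREATE TABLE ACCOUNTS_DEMO (
    ACC_NO  VARCHAR(20)
      CONSTRAINT UQ_ACC_NO UNIQUE
        USING DESC INDEX IDX_ACC_NO_DESC,
    BALANCE NUMERIC(18, 2)
  );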

5.4.1.7.2 PRIMARY KEY

The PRIMARY KEY constraint is built on one or more key columns, where each column has the NOT NULL constraint specified. The values across the key columns in any row must be unique. A table can have only one primary key.

  • A single-column Primary Key can be defined as a column level or a table-level constraint

  • A multi-column Primary Key must be specified as a table-level constraint

5.4.1.7.3 The UNIQUE Constraint

The UNIQUE constraint defines the requirement of content uniqueness for the values in a key throughout the table. A table can contain any number of unique key constraints.

As with the Primary Key, the Unique constraint can be multi-column. If so, it must be specified as a table-level constraint.

5.4.1.7.3.1 NULL in Unique Keys

Firebird’s SQL-99-compliant rules for UNIQUE constraints allow one or more NULLs in a column with a UNIQUE constraint. That makes it possible to define a UNIQUE constraint on a column that does not have the NOT NULL constraint.

For UNIQUE keys that span multiple columns, the logic is a little complicated:

  • Multiple rows having null in all the columns of the key are allowed

  • Multiple rows having keys with different combinations of nulls and non-null values are allowed

  • Multiple rows having the same key columns null and the rest filled with non-null values are allowed, provided the values differ in at least one column

  • Multiple rows having the same key columns null and the rest filled with non-null values that are the same in every column will violate the constraint

The rules for uniqueness can be summarised thus:

In principle, all nulls are considered distinct. However, if two rows have exactly the same key columns filled with non-null values, the NULL columns are ignored and the uniqueness is determined on the non-null columns as though they constituted the entire key.

Illustration

  RECREATE TABLE t( x int, y int, z int, unique(x,y,z));
  INSERT INTO t values( NULL, 1, 1 );
  INSERT INTO t values( NULL, NULL, 1 );
  INSERT INTO t values( NULL, NULL, NULL );
  INSERT INTO t values( NULL, NULL, NULL ); -- Permitted
  INSERT INTO t values( NULL, NULL, 1 ); -- Not permitted

5.4.1.7.4 FOREIGN KEY

A Foreign Key ensures that the participating column(s) can contain only values that also exist in the referenced column(s) in the master table. These referenced columns are often called target columns. They must be the primary key or a unique key in the target table. They need not have a NOT NULL constraint defined on them although, if they are the primary key, they will, of course, have that constraint.

The foreign key columns in the referencing table itself do not require a NOT NULL constraint.

A single-column Foreign Key can be defined in the column declaration, using the keyword REFERENCES:

  ... ,
  ARTIFACT_ID INTEGER REFERENCES COLLECTION (ARTIFACT_ID),

The column ARTIFACT_ID in the example references a column of the same name in the table COLLECTION.

Both single-column and multi-column foreign keys can be defined at the table level. For a multi-column Foreign Key, the table-level declaration is the only option. This method also enables the provision of an optional name for the constraint:

  ...
  CONSTRAINT FK_ARTSOURCE FOREIGN KEY(DEALER_ID, COUNTRY)
    REFERENCES DEALER (DEALER_ID, COUNTRY),

Notice that the column names in the referenced (master) table may differ from those in the Foreign Key.

Note

If no target columns are specified, the Foreign Key automatically references the target table’s Primary Key.

5.4.1.7.4.1 Foreign Key Actions

With the sub-clauses ON UPDATE and ON DELETE it is possible to specify an action to be taken on the affected foreign key column(s) when referenced values in the master table are changed:

NO ACTION

(the default) - Nothing is done

CASCADE

The change in the master table is propagated to the corresponding row(s) in the child table. If a key value changes, the corresponding key in the child records changes to the new value; if the master row is deleted, the child records are deleted.

SET DEFAULT

The Foreign Key columns in the affected rows will be set to their default values as they were when the foreign key constraint was defined.

SET NULL

The Foreign Key columns in the affected rows will be set to NULL.

The specified action, or the default NO ACTION, could cause a Foreign Key column to become invalid. For example, it could get a value that is not present in the master table, or it could become NULL while the column has a NOT NULL constraint. Such conditions will cause the operation on the master table to fail with an error message.

Example

  ...
  CONSTRAINT FK_ORDERS_CUST
    FOREIGN KEY (CUSTOMER) REFERENCES CUSTOMERS (ID)
      ON UPDATE CASCADE ON DELETE SET NULL

5.4.1.7.5 CHECK Constraint

The CHECK constraint defines the condition that values inserted into this column must satisfy. A condition is a logical expression (also called a predicate) that can return TRUE, FALSE or UNKNOWN. The condition is considered satisfied if the predicate returns TRUE or UNKNOWN (equivalent to NULL). If the predicate returns FALSE, the value will not be accepted. The condition is checked when inserting a new row into the table (the INSERT statement), when updating an existing value in a table column (the UPDATE statement), and also by statements where one of these actions may take place (UPDATE OR INSERT, MERGE).

Important

A CHECK constraint on a domain-based column does not replace an existing CHECK condition on the domain, but becomes an addition to it. The Firebird engine has no way, during definition, to verify that the extra CHECK does not conflict with the existing one.

CHECK constraints — whether defined at table level or column level — refer to table columns by their names. The use of the keyword VALUE as a placeholder — as in domain CHECK constraints — is not valid in the context of defining column constraints.

Example with two column-level constraints and one at table level:

  CREATE TABLE PLACES (
    ...
    LAT DECIMAL(9, 6) CHECK (ABS(LAT) <= 90),
    LON DECIMAL(9, 6) CHECK (ABS(LON) <= 180),
    ...
    CONSTRAINT CHK_POLES CHECK (ABS(LAT) < 90 OR LON = 0)
  );

5.4.1.7.6 NOT NULL Constraint

In Firebird, columns are nullable by default. The NOT NULL constraint specifies that the column cannot take NULL in place of a value.

A NOT NULL constraint can only be defined as a column constraint, not as a table constraint.

5.4.1.8 SQL SECURITY Clause

The SQL SECURITY clause specifies the security context for executing functions referenced in calculated columns and CHECK constraints, and the default context used for triggers fired for this table. When SQL Security is not specified, the default value of the database is applied at runtime.

See also SQL Security in chapter Security.

5.4.1.9 Replication Management

When the database has been configured using ALTER DATABASE INCLUDE ALL TO PUBLICATION, new tables will automatically be added for publication, unless overridden using the DISABLE PUBLICATION clause.

If the database has not been configured for INCLUDE ALL (or has later been reconfigured using ALTER DATABASE EXCLUDE ALL FROM PUBLICATION), new tables will not automatically be added for publication. To include tables for publication, the ENABLE PUBLICATION clause must be used.
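
For instance, in this sketch (hypothetical tables) publication is controlled directly in the table definition:

  -- keep this table out of the publication, even if INCLUDE ALL is configured
  CREATE TABLE AUDIT_LOCAL (
    ID  BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    MSG VARCHAR(200)
  ) DISABLE PUBLICATION;

  -- explicitly add this table to the publication
  CREATE TABLE SALES_REPL (
    ID     BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    AMOUNT NUMERIC(18, 2)
  ) ENABLE PUBLICATION;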

5.4.1.10 Who Can Create a Table

The CREATE TABLE statement can be executed by:

  • Administrators

  • Users with the CREATE TABLE privilege

The user executing the CREATE TABLE statement becomes the owner of the table.

5.4.1.11 CREATE TABLE Examples

  1. Creating the COUNTRY table with the primary key specified as a column constraint.

    CREATE TABLE COUNTRY (
      COUNTRY COUNTRYNAME NOT NULL PRIMARY KEY,
      CURRENCY VARCHAR(10) NOT NULL
    );
  2. Creating the STOCK table with the named primary key specified at the column level and the named unique key specified at the table level.

    CREATE TABLE STOCK (
      MODEL SMALLINT NOT NULL CONSTRAINT PK_STOCK PRIMARY KEY,
      MODELNAME CHAR(10) NOT NULL,
      ITEMID INTEGER NOT NULL,
      CONSTRAINT MOD_UNIQUE UNIQUE (MODELNAME, ITEMID)
    );
  3. Creating the JOB table with a primary key constraint spanning two columns, a foreign key constraint for the COUNTRY table and a table-level CHECK constraint. The table also contains an array of 5 elements.

    CREATE TABLE JOB (
      JOB_CODE JOBCODE NOT NULL,
      JOB_GRADE JOBGRADE NOT NULL,
      JOB_COUNTRY COUNTRYNAME,
      JOB_TITLE VARCHAR(25) NOT NULL,
      MIN_SALARY NUMERIC(18, 2) DEFAULT 0 NOT NULL,
      MAX_SALARY NUMERIC(18, 2) NOT NULL,
      JOB_REQUIREMENT BLOB SUB_TYPE 1,
      LANGUAGE_REQ VARCHAR(15) [1:5],
      PRIMARY KEY (JOB_CODE, JOB_GRADE),
      FOREIGN KEY (JOB_COUNTRY) REFERENCES COUNTRY (COUNTRY)
        ON UPDATE CASCADE
        ON DELETE SET NULL,
      CONSTRAINT CHK_SALARY CHECK (MIN_SALARY < MAX_SALARY)
    );
  4. Creating the PROJECT table with primary, foreign and unique key constraints with custom index names specified with the USING clause.

    CREATE TABLE PROJECT (
      PROJ_ID PROJNO NOT NULL,
      PROJ_NAME VARCHAR(20) NOT NULL UNIQUE USING DESC INDEX IDX_PROJNAME,
      PROJ_DESC BLOB SUB_TYPE 1,
      TEAM_LEADER EMPNO,
      PRODUCT PRODTYPE,
      CONSTRAINT PK_PROJECT PRIMARY KEY (PROJ_ID) USING INDEX IDX_PROJ_ID,
      FOREIGN KEY (TEAM_LEADER) REFERENCES EMPLOYEE (EMP_NO)
        USING INDEX IDX_LEADER
    );
  5. Creating a table with an identity column

    create table objects (
      id integer generated by default as identity primary key,
      name varchar(15)
    );
    insert into objects (name) values ('Table');
    insert into objects (id, name) values (10, 'Computer');
    insert into objects (name) values ('Book');
    select * from objects order by id;

              ID NAME
    ============ ===============
               1 Table
               2 Book
              10 Computer
  6. Creating the SALARY_HISTORY table with two computed fields. The first one is declared according to the SQL:2003 standard, while the second one is declared according to the traditional declaration of computed fields in Firebird.

    CREATE TABLE SALARY_HISTORY (
      EMP_NO EMPNO NOT NULL,
      CHANGE_DATE TIMESTAMP DEFAULT 'NOW' NOT NULL,
      UPDATER_ID VARCHAR(20) NOT NULL,
      OLD_SALARY SALARY NOT NULL,
      PERCENT_CHANGE DOUBLE PRECISION DEFAULT 0 NOT NULL,
      SALARY_CHANGE GENERATED ALWAYS AS
        (OLD_SALARY * PERCENT_CHANGE / 100),
      NEW_SALARY COMPUTED BY
        (OLD_SALARY + OLD_SALARY * PERCENT_CHANGE / 100)
    );
  7. With DEFINER set for table t, user US needs only the SELECT privilege on t. If it were set for INVOKER, the user would also need the EXECUTE privilege on function f.

    set term ^;
    create function f() returns int
    as
    begin
      return 3;
    end^
    set term ;^
    create table t (i integer, c computed by (i + f())) SQL SECURITY DEFINER;
    insert into t values (2);
    grant select on table t to user us;
    commit;
    connect 'localhost:/tmp/7.fdb' user us password 'pas';
    select * from t;
  8. With DEFINER set for table tr, user US needs only the INSERT privilege on tr. If it were set for INVOKER, either the user or the trigger would also need the INSERT privilege on table t. The result would be the same if SQL SECURITY DEFINER were specified for trigger tr_ins:

    create table tr (i integer) SQL SECURITY DEFINER;
    create table t (i integer);
    set term ^;
    create trigger tr_ins for tr after insert
    as
    begin
      insert into t values (NEW.i);
    end^
    set term ;^
    grant insert on table tr to user us;
    commit;
    connect 'localhost:/tmp/29.fdb' user us password 'pas';
    insert into tr values(2);

5.4.1.12 Global Temporary Tables (GTT)

Global temporary tables have persistent metadata, but their contents are transaction-bound (the default) or connection-bound. Every transaction or connection has its own private instance of a GTT, isolated from all the others. Instances are only created if and when the GTT is referenced. They are destroyed when the transaction ends or on disconnection. The metadata of a GTT can be modified or removed using ALTER TABLE and DROP TABLE, respectively.

Syntax

  CREATE GLOBAL TEMPORARY TABLE tablename
    (<column_def> [, {<column_def> | <table_constraint>} ...])
    [<gtt_table_attrs>]

  <gtt_table_attrs> ::= <gtt_table_attr> [<gtt_table_attr> ...]

  <gtt_table_attr> ::=
      <sql_security>
    | ON COMMIT {DELETE | PRESERVE} ROWS

Syntax notes

  • ON COMMIT DELETE ROWS creates a transaction-level GTT (the default), ON COMMIT PRESERVE ROWS a connection-level GTT

  • An EXTERNAL [FILE] clause is not allowed in the definition of a global temporary table

GTTs are writable in read-only transactions. The effect is as follows:

  • Read-only transaction in a read-write database - writable in both ON COMMIT PRESERVE ROWS and ON COMMIT DELETE ROWS

  • Read-only transaction in a read-only database - writable in ON COMMIT DELETE ROWS only

5.4.1.12.1 Restrictions on GTTs

GTTs can be dressed up with all the features and paraphernalia of ordinary tables (keys, references, indexes, triggers and so on) but there are a few restrictions:

  • GTTs and regular tables cannot reference one another

  • A connection-bound (PRESERVE ROWS) GTT cannot reference a transaction-bound (DELETE ROWS) GTT

  • Domain constraints cannot reference any GTT

  • The destruction of a GTT instance at the end of its life cycle does not cause any BEFORE/AFTER delete triggers to fire

Tip

In an existing database, it is not always easy to distinguish a regular table from a GTT, or a transaction-level GTT from a connection-level GTT. Use this query to find out what type of table you are looking at:

  select t.rdb$type_name
  from rdb$relations r
  join rdb$types t on r.rdb$relation_type = t.rdb$type
  where t.rdb$field_name = 'RDB$RELATION_TYPE'
  and r.rdb$relation_name = 'TABLENAME'

For an overview of the types of all the relations in the database:

  select r.rdb$relation_name, t.rdb$type_name
  from rdb$relations r
  join rdb$types t on r.rdb$relation_type = t.rdb$type
  where t.rdb$field_name = 'RDB$RELATION_TYPE'
  and coalesce (r.rdb$system_flag, 0) = 0

The RDB$TYPE_NAME field will show PERSISTENT for a regular table, VIEW for a view, GLOBAL_TEMPORARY_PRESERVE for a connection-bound GTT and GLOBAL_TEMPORARY_DELETE for a transaction-bound GTT.

5.4.1.12.2 Examples of Global Temporary Tables
  1. Creating a connection-scoped global temporary table.

    CREATE GLOBAL TEMPORARY TABLE MYCONNGTT (
      ID INTEGER NOT NULL PRIMARY KEY,
      TXT VARCHAR(32),
      TS TIMESTAMP DEFAULT CURRENT_TIMESTAMP)
    ON COMMIT PRESERVE ROWS;
  2. Creating a transaction-scoped global temporary table that uses a foreign key to reference a connection-scoped global temporary table. The ON COMMIT sub-clause is optional because DELETE ROWS is the default.

    CREATE GLOBAL TEMPORARY TABLE MYTXGTT (
      ID INTEGER NOT NULL PRIMARY KEY,
      PARENT_ID INTEGER NOT NULL REFERENCES MYCONNGTT(ID),
      TXT VARCHAR(32),
      TS TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    ) ON COMMIT DELETE ROWS;

5.4.1.13 External Tables

The optional EXTERNAL [FILE] clause specifies that the table is stored outside the database in an external text file of fixed-length records. The columns of a table stored in an external file can be of any type except BLOB or ARRAY, although for most purposes, only columns of CHAR types would be useful.

All you can do with a table stored in an external file is insert new rows (INSERT) and query the data (SELECT). Updating existing data (UPDATE) and deleting rows (DELETE) are not possible.

A file that is defined as an external table must be located on a storage device that is physically present on the machine where the Firebird server runs and, if the parameter ExternalFileAccess in the firebird.conf configuration file is Restrict, it must be in one of the directories listed there as the argument for Restrict. If the file does not exist yet, Firebird will create it on first access.

Important

The ability to use external files for a table depends on the value set for the ExternalFileAccess parameter in firebird.conf:

  • If it is set to None (the default), any attempt to access an external file will be denied.

  • The Restrict setting is recommended, for restricting external file access to directories created explicitly for the purpose by the server administrator. For example:

    • ExternalFileAccess = Restrict externalfiles will restrict access to a directory named externalfiles directly beneath the Firebird root directory

    • ExternalFileAccess = d:\databases\outfiles; e:\infiles will restrict access to just those two directories on the Windows host server. Note that any path that is a network mapping will not work. Paths enclosed in single or double quotes will not work, either.

  • If this parameter is set to Full, external files may be accessed anywhere on the host file system. This creates a security vulnerability and is not recommended.

5.4.1.13.1 External File Format

The row format of the external table is fixed length and binary. There are no field delimiters: both field and row boundaries are determined by maximum sizes, in bytes, of the field definitions. It is important to keep this in mind, both when defining the structure of the external table and when designing an input file for an external table that is to import data from another application. The ubiquitous .csv format, for example, is of no use as an input file and cannot be generated directly into an external file.

The most useful data type for the columns of external tables is the fixed-length CHAR type, of suitable lengths for the data they are to carry. Date and number types are easily cast to and from strings whereas, unless the files are to be read by another Firebird database, the native data types — binary data — will appear to external applications as unparseable alphabetti.

Of course, there are ways to manipulate typed data so as to generate output files from Firebird that can be read directly as input files to other applications, using stored procedures, with or without employing external tables. Such techniques are beyond the scope of a language reference. Here, we provide some guidelines and tips for producing and working with simple text files, since the external table feature is often used as an easy way to produce or read transaction-independent logs that can be studied off-line in a text editor or auditing application.

5.4.1.13.1.1 Row Delimiters

Generally, external files are more useful if rows are separated by a delimiter, in the form of a newline sequence that is recognised by reader applications on the intended platform. For most contexts on Windows, it is the two-byte ‘CRLF’ sequence, carriage return (ASCII code decimal 13) and line feed (ASCII code decimal 10). On POSIX, LF on its own is usual; for some MacOSX applications, it may be LFCR. There are various ways to populate this delimiter column. In our example below, it is done by using a BEFORE INSERT trigger and the internal function ASCII_CHAR.

5.4.1.13.1.2 External Table Example

For our example, we will define an external log table that might be used by an exception handler in a stored procedure or trigger. The external table is chosen because the messages from any handled exceptions will be retained in the log, even if the transaction that launched the process is eventually rolled back because of another, unhandled exception. For demonstration purposes, it has just two data columns, a time stamp and a message. The third column stores the row delimiter:

  CREATE TABLE ext_log
    EXTERNAL FILE 'd:\externals\log_me.txt' (
    stamp CHAR (24),
    message CHAR(100),
    crlf CHAR(2) -- for a Windows context
  );
  COMMIT;

Now, a trigger, to write the timestamp and the row delimiter each time a message is written to the file:

  SET TERM ^;
  CREATE TRIGGER bi_ext_log FOR ext_log
  ACTIVE BEFORE INSERT
  AS
  BEGIN
    IF (new.stamp is NULL) then
      new.stamp = CAST (CURRENT_TIMESTAMP as CHAR(24));
    new.crlf = ASCII_CHAR(13) || ASCII_CHAR(10);
  END ^
  COMMIT ^
  SET TERM ;^

Inserting some records (which could have been done by an exception handler or a fan of Shakespeare):

  insert into ext_log (message)
  values('Shall I compare thee to a summer''s day?');
  insert into ext_log (message)
  values('Thou art more lovely and more temperate');

The output:

  2015-10-07 15:19:03.4110Shall I compare thee to a summer's day?
  2015-10-07 15:19:58.7600Thou art more lovely and more temperate

5.4.2 ALTER TABLE

Used for: Altering the structure of a table

Available in: DSQL, ESQL

Syntax

  ALTER TABLE tablename
    <operation> [, <operation> ...]

  <operation> ::=
      ADD <col_def>
    | ADD <tconstraint>
    | DROP colname
    | DROP CONSTRAINT constr_name
    | ALTER [COLUMN] colname <col_mod>
    | ALTER SQL SECURITY {INVOKER | DEFINER}
    | DROP SQL SECURITY
    | {ENABLE | DISABLE} PUBLICATION

  <col_mod> ::=
      TO newname
    | POSITION newpos
    | <regular_col_mod>
    | <computed_col_mod>
    | <identity_col_mod>

  <regular_col_mod> ::=
      TYPE {<datatype> | domainname}
    | SET DEFAULT {<literal> | NULL | <context_var>}
    | DROP DEFAULT
    | {SET | DROP} NOT NULL

  <computed_col_mod> ::=
    [TYPE <datatype>] {COMPUTED [BY] | GENERATED ALWAYS AS} (<expression>)

  <identity_col_mod> ::=
      SET GENERATED {ALWAYS | BY DEFAULT} [<identity_mod_option>...]
    | <identity_mod_option>...
    | DROP IDENTITY

  <identity_mod_option> ::=
      RESTART [WITH restart_value]
    | SET INCREMENT [BY] inc_value

  !! See CREATE TABLE syntax for further rules !!

Table 5.4.2.1 ALTER TABLE Statement Parameters

  • tablename - Name (identifier) of the table

  • operation - One of the available operations altering the structure of the table

  • colname - Name (identifier) for a column in the table. The maximum length is 63 characters. Must be unique in the table.

  • domain_name - Domain name

  • newname - New name (identifier) for the column. The maximum length is 63 characters. Must be unique in the table.

  • newpos - The new column position (an integer between 1 and the number of columns in the table)

  • other_table - The name of the table referenced by the foreign key constraint

  • literal - A literal value that is allowed in the given context

  • context_var - A context variable whose type is allowed in the given context

  • check_condition - The condition of a CHECK constraint that will be satisfied if it evaluates to TRUE or UNKNOWN/NULL

  • restart_value - The first value of the identity column after restart

  • inc_value - The increment (or step) value of the identity column; zero (0) is not allowed

The ALTER TABLE statement changes the structure of an existing table. With one ALTER TABLE statement it is possible to perform multiple operations, adding/dropping columns and constraints and also altering column specifications.

Multiple operations in an ALTER TABLE statement are separated with commas.

5.4.2.1 Version Count Increments

Some changes in the structure of a table increment the metadata change counter (version count) assigned to every table. The number of metadata changes is limited to 255 for each table, or 32,000 for each view. Once the counter reaches this limit, you will not be able to make any further changes to the structure of the table or view without resetting the counter.

To reset the metadata change counter

You need to back up and restore the database using the gbak utility.

5.4.2.2 The ADD Clause

With the ADD clause you can add a new column or a new table constraint. The syntax for defining the column and the syntax for defining the table constraint are the same as described for the CREATE TABLE statement.

Effect on Version Count

  • Each time a new column is added, the metadata change counter is increased by one

  • Adding a new table constraint does not increase the metadata change counter

Points to Be Aware of

  1. Adding a column with a NOT NULL constraint without a DEFAULT value will fail if the table has existing rows. When adding a non-nullable column, it is recommended either to set a default value for it, or to create it as nullable, update the column in existing rows with a non-null value, and then add a NOT NULL constraint.

  2. When a new CHECK constraint is added, existing data is not tested for compliance. Prior testing of existing data against the new CHECK expression is recommended.

  3. Although adding an identity column is supported, this will only succeed if the table is empty; it will fail if the table already contains one or more rows.
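
To illustrate point 1, a sketch (the table and column are hypothetical) of the recommended staged approach for adding a non-nullable column to a table that already contains rows:

  -- step 1: add the column as nullable
  ALTER TABLE CUSTOMERS_DEMO
    ADD REGION VARCHAR(20);

  -- step 2: give every existing row a non-null value
  UPDATE CUSTOMERS_DEMO SET REGION = 'UNKNOWN' WHERE REGION IS NULL;
  COMMIT;

  -- step 3: add the NOT NULL constraint
  ALTER TABLE CUSTOMERS_DEMO
    ALTER COLUMN REGION SET NOT NULL;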

5.4.2.3 The DROP Clause

The DROP colname clause deletes the specified column from the table. An attempt to drop a column will fail if anything references it. Consider the following items as sources of potential dependencies:

  • column or table constraints

  • indexes

  • stored procedures and triggers

  • views

Effect on Version Count

  • Each time a column is dropped, the table’s metadata change counter is increased by one.

5.4.2.4 The DROP CONSTRAINT Clause

The DROP CONSTRAINT clause deletes the specified column-level or table-level constraint.

A PRIMARY KEY or UNIQUE key constraint cannot be deleted if it is referenced by a FOREIGN KEY constraint in another table. It will be necessary to drop that FOREIGN KEY constraint before attempting to drop the PRIMARY KEY or UNIQUE key constraint it references.

Effect on Version Count

  • Deleting a column constraint or a table constraint does not increase the metadata change counter.

5.4.2.5 The ALTER [COLUMN] Clause

With the ALTER [COLUMN] clause, attributes of existing columns can be modified without the need to drop and re-add the column. Permitted modifications are:

  • change the name (does not affect the metadata change counter)

  • change the data type (increases the metadata change counter by one)

  • change the column position in the column list of the table (does not affect the metadata change counter)

  • delete the default column value (does not affect the metadata change counter)

  • set a default column value or change the existing default (does not affect the metadata change counter)

  • change the type and expression for a computed column (does not affect the metadata change counter)

  • set the NOT NULL constraint (does not affect the metadata change counter)

  • drop the NOT NULL constraint (does not affect the metadata change counter)

  • change the type of an identity column, or change an identity column to a regular column

  • restart an identity column

  • change the increment of an identity column

5.4.2.6 Renaming a Column: the TO Clause

The TO keyword with a new identifier renames an existing column. The table must not have an existing column that has the same identifier.

It will not be possible to change the name of a column that is included in any constraint: PRIMARY KEY, UNIQUE key, FOREIGN KEY, column constraint or the CHECK constraint of the table.

Renaming a column will also be disallowed if the column is used in any trigger, stored procedure or view.

5.4.2.7 Changing the Data Type of a Column: the TYPE Clause

The keyword TYPE changes the data type of an existing column to another, allowable type. A type change that might result in data loss will be disallowed. As an example, the number of characters in the new type for a CHAR or VARCHAR column cannot be smaller than the existing specification for it.

If the column was declared as an array, no change to its type or its number of dimensions is permitted.

The data type of a column that is involved in a foreign key, primary key or unique constraint cannot be changed at all.
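
For example, a sketch using the STOCK table created earlier in this chapter: widening the string column is allowed, while shrinking it is rejected:

  -- allowed: CHAR(10) widened to VARCHAR(20)
  ALTER TABLE STOCK
    ALTER COLUMN MODELNAME TYPE VARCHAR(20);

  -- rejected: the new length is smaller than the existing one
  -- ALTER TABLE STOCK
  --   ALTER COLUMN MODELNAME TYPE VARCHAR(5);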

5.4.2.8 Changing the Position of a Column: the POSITION Clause

The POSITION keyword changes the position of an existing column in the notional left-to-right layout of the record.

Numbering of column positions starts at 1.

  • If a position less than 1 is specified, an error message will be returned

  • If a position number is greater than the number of columns in the table, its new position will be adjusted silently to match the number of columns.
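
A quick sketch using the COUNTRY table created earlier, moving the CURRENCY column to the first position:

  ALTER TABLE COUNTRY
    ALTER COLUMN CURRENCY POSITION 1;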

5.4.2.9 The DROP DEFAULT and SET DEFAULT Clauses

The optional DROP DEFAULT clause deletes the default value for the column if it was put there previously by a CREATE TABLE or ALTER TABLE statement.

  • If the column is based on a domain with a default value, the default value will revert to the domain default

  • An execution error will be raised if an attempt is made to delete the default value of a column which has no default value or whose default value is domain-based

The optional SET DEFAULT clause sets a default value for the column. If the column already has a default value, it will be replaced with the new one. The default value applied to a column always overrides one inherited from a domain.
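
A brief sketch of both clauses, using the STOCK table from the examples above:

  -- set (or replace) the default for an existing column
  ALTER TABLE STOCK
    ALTER COLUMN MODEL SET DEFAULT 0;

  -- remove the column-level default again
  ALTER TABLE STOCK
    ALTER COLUMN MODEL DROP DEFAULT;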

5.4.2.10 The SET NOT NULL and DROP NOT NULL Clauses

The SET NOT NULL clause adds a NOT NULL constraint on an existing table column. In contrast to the definition in CREATE TABLE, it is not possible to specify a constraint name.

Note

The successful addition of the NOT NULL constraint is subject to a full data validation on the table, so ensure that the column has no nulls before attempting the change.

An explicit NOT NULL constraint on a domain-based column overrides the domain settings. In this scenario, changing the domain to be nullable does not extend to the table column.

Dropping the NOT NULL constraint from a column whose type is a domain that also has a NOT NULL constraint has no observable effect until the NOT NULL constraint is dropped from the domain as well.

5.4.2.11 The COMPUTED [BY] or GENERATED ALWAYS AS Clauses

The data type and expression underlying a computed column can be modified using a COMPUTED [BY] or GENERATED ALWAYS AS clause in the ALTER TABLE ALTER [COLUMN] statement. Converting a regular column to a computed one and vice versa are not permitted.

5.4.2.12 Changing Identity Columns

For identity columns (GENERATED {ALWAYS | BY DEFAULT} AS IDENTITY) it is possible to modify several properties using the following clauses.

5.4.2.12.1 Identity Type

The SET GENERATED {ALWAYS | BY DEFAULT} clause changes an identity column from ALWAYS to BY DEFAULT and vice versa. It is not possible to use this to change a regular column to an identity column.

5.4.2.12.2 RESTART

The RESTART clause restarts the sequence used for generating identity values. If only the RESTART clause is specified, then the sequence resets to the initial value specified when the identity column was defined. If the optional WITH restart_value clause is specified, the sequence will restart with the specified value.

Note

In Firebird 3.0, RESTART WITH restart_value would also change the configured initial value to restart_value. This was not compliant with the SQL standard, so in Firebird 4.0, RESTART WITH restart_value will only restart the sequence with the specified value. Subsequent RESTARTs (without WITH) will use the START WITH value specified when the identity column was defined.

It is currently not possible to change the configured start value.

5.4.2.12.3 SET INCREMENT

The SET INCREMENT clause changes the increment of the identity column.

5.4.2.12.4 DROP IDENTITY

The DROP IDENTITY clause will change an identity column to a regular column.

Note

It is not possible to change a regular column to an identity column.
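
A combined sketch of the identity-related clauses described above, using the objects table from the CREATE TABLE examples:

  -- switch the generation mode
  ALTER TABLE objects
    ALTER id SET GENERATED ALWAYS;

  -- restart the sequence and change its step
  ALTER TABLE objects
    ALTER id RESTART WITH 1000 SET INCREMENT BY 5;

  -- turn the identity column back into a regular column
  ALTER TABLE objects
    ALTER id DROP IDENTITY;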

5.4.2.13 Changing SQL Security

Using the ALTER SQL SECURITY or DROP SQL SECURITY clauses, it is possible to change or drop the SQL Security property of a table. After dropping SQL Security, the default value of the database is applied at runtime.

Note

If the SQL Security property is changed for a table, triggers that do not have an explicit SQL Security property will not see the effect of the change until the next time the trigger is loaded into the metadata cache.
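
For example, a sketch using the table t from the CREATE TABLE examples:

  -- switch the table to invoker rights
  ALTER TABLE t
    ALTER SQL SECURITY INVOKER;

  -- remove the property, so the database default applies again
  ALTER TABLE t
    DROP SQL SECURITY;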

5.4.2.14 Replication Management

To stop replicating a table, use the DISABLE PUBLICATION clause. To start replicating a table, use the ENABLE PUBLICATION clause.

The change in publication status takes effect at commit.

5.4.2.15 Attributes that Cannot Be Altered

The following alterations are not supported:

  • Changing the collation of a character type column

5.4.2.16 Who Can Alter a Table?

The ALTER TABLE statement can be executed by:

  • Administrators

  • The owner of the table

  • Users with the ALTER ANY TABLE privilege

5.4.2.17 Examples Using ALTER TABLE

  1. Adding the CAPITAL column to the COUNTRY table.

    ALTER TABLE COUNTRY
      ADD CAPITAL VARCHAR(25);
  2. Adding the CAPITAL column with the NOT NULL and UNIQUE constraint and deleting the CURRENCY column.

    ALTER TABLE COUNTRY
      ADD CAPITAL VARCHAR(25) NOT NULL UNIQUE,
      DROP CURRENCY;
  3. Adding the CHK_SALARY check constraint and a foreign key to the JOB table.

    ALTER TABLE JOB
      ADD CONSTRAINT CHK_SALARY CHECK (MIN_SALARY < MAX_SALARY),
      ADD FOREIGN KEY (JOB_COUNTRY) REFERENCES COUNTRY (COUNTRY);
  4. Setting default value for the MODEL field, changing the type of the ITEMID column and renaming the MODELNAME column.

    ALTER TABLE STOCK
      ALTER COLUMN MODEL SET DEFAULT 1,
      ALTER COLUMN ITEMID TYPE BIGINT,
      ALTER COLUMN MODELNAME TO NAME;
  5. Restarting the sequence of an identity column.

    ALTER TABLE objects
      ALTER ID RESTART WITH 100;
  6. Changing the computed columns NEW_SALARY and SALARY_CHANGE.

    ALTER TABLE SALARY_HISTORY
      ALTER NEW_SALARY GENERATED ALWAYS AS
        (OLD_SALARY + OLD_SALARY * PERCENT_CHANGE / 100),
      ALTER SALARY_CHANGE COMPUTED BY
        (OLD_SALARY * PERCENT_CHANGE / 100);

See also: Section 5.4.1, CREATE TABLE; Section 5.4.3, DROP TABLE; Section 5.3.1, CREATE DOMAIN

5.4.3 DROP TABLE

Used for: Dropping (deleting) a table

Available in: DSQL, ESQL

Syntax

  DROP TABLE tablename

Table 5.4.3.1 DROP TABLE Statement Parameter

  • tablename - Name (identifier) of the table

The DROP TABLE statement drops (deletes) an existing table. If the table has dependencies, the DROP TABLE statement will fail with an execution error.

When a table is dropped, all its triggers and indexes will be deleted as well.

5.4.3.1 Who Can Drop a Table?

The DROP TABLE statement can be executed by:

  • Administrators

  • The owner of the table

  • Users with the DROP ANY TABLE privilege

5.4.3.2 Example of DROP TABLE

Dropping the COUNTRY table.

  DROP TABLE COUNTRY;

See also: Section 5.4.1, CREATE TABLE; Section 5.4.2, ALTER TABLE; Section 5.4.4, RECREATE TABLE

5.4.4 RECREATE TABLE

Used for: Creating a new table (relation) or recreating an existing one

Available in: DSQL

Syntax

  RECREATE [GLOBAL TEMPORARY] TABLE tablename
    [EXTERNAL [FILE] 'filespec']
    (<col_def> [, {<col_def> | <tconstraint>} ...])
    [{<table_attrs> | <gtt_table_attrs>}]

See the CREATE TABLE section for the full syntax of CREATE TABLE and descriptions of defining tables, columns and constraints.

RECREATE TABLE creates or recreates a table. If a table with this name already exists, the RECREATE TABLE statement will try to drop it and create a new one. Existing dependencies will prevent the statement from executing.

5.4.4.1 Example of RECREATE TABLE

Creating or recreating the COUNTRY table.

  RECREATE TABLE COUNTRY (
    COUNTRY COUNTRYNAME NOT NULL PRIMARY KEY,
    CURRENCY VARCHAR(10) NOT NULL
  );

See also: Section 5.4.1, CREATE TABLE; Section 5.4.3, DROP TABLE