10. Automating SQL Tasks
10.1. psql
The course has focused on carrying out database tasks with SQL in pgAdmin, and in QGIS. However, full database functionality is also available through command line shell applications, where an experienced database user has complete control over the database. pgAdmin includes the command line application Tools > PSQL Tool. In a normal PostgreSQL installation it is in the bin folder (e.g. C:\Program Files\PostgreSQL\13\bin\psql.exe), and can be called from a batch file.
Full documentation on psql commands can be found in the PostgreSQL documentation.
Using psql from pgAdmin means that your database connection is already established. If you run psql from a command prompt, you can set environment variables for the database connection to avoid having to supply the parameters each time - for example:
set PGHOST=localhost
set PGUSER=pgis
set PGPASSWORD=pgis
set PGDATABASE=pg_training
set PGPORT=5432
Alternatively, connect as in the following example (you will be prompted for a password):
psql -h localhost -d pg_training -p 5432 -U pgis

-h (host)
-d (database)
-p (port, default 5432)
-U (username)
10.1.1. Running SQL in psql
To run an SQL command, use the -c option, e.g.:

psql -c "select count(*) from geometry_columns"
To execute a script, use the -f option:

psql -f script/vml_count_features.sql
10.1.2. psql Meta-commands
psql includes a set of meta-commands, prefixed with a backslash, that are processed by psql itself. Meta-commands make psql useful for administration or scripting.
10.1.2.1. Basics
\h | get help with SQL |
\? | get help with PSQL |
\g OR ; | execute a query |
\q | quit |
\cd | change directory |
10.1.2.2. Input / Outputs
\echo [STRING] | write string to standard output |
\i [FILE] | execute commands from file |
\o [FILE] | send all query results to file |
10.1.2.3. Information
\d [OBJECTNAME] | describes tables, sequences, views, etc. |
\db | lists tablespaces |
\df | lists functions |
\dn | lists schemas |
\dp OR \z | lists access privileges for tables, views, sequences, etc. |
\du | lists users |
\l | lists all databases |
\dt | lists tables |
Visit the PostgreSQL documentation for a more in depth guide to PSQL.
10.1.3. Transactions
Database administration often involves issuing a coordinated set of commands to the database. An important strength of PostgreSQL is its transaction system: most actions can be executed within a transaction. This allows the administrator to build a script that will either succeed or fail as a whole, which can be critically important on a production system.
A transaction wraps up a sequence of commands. Use BEGIN to start a transaction, and end it with COMMIT; to complete the transaction.
BEGIN;
command 1;
command 2;
command 3;
command 4;
COMMIT;
The transaction will only succeed if all four commands succeed.
To test a transaction before committing, use ROLLBACK. Rollback enables you to run the transaction to see whether it succeeds, but without making any changes to the database: the data is returned to its original state.
BEGIN;
command 1;
command 2;
command 3;
command 4;
ROLLBACK;
The transaction may succeed, but no changes are made. The whole script will fail if any one of the commands raises an error. While a transaction is in progress it holds table locks, so other users cannot modify the affected tables.
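The all-or-nothing behaviour can be demonstrated outside PostgreSQL too. The following is a minimal sketch using Python's built-in sqlite3 module as a stand-in database (the table name and values are invented for illustration); it shows that a rolled-back transaction leaves the data untouched while a committed one persists:

```python
import sqlite3

# In-memory SQLite database stands in for PostgreSQL here; the
# BEGIN/COMMIT/ROLLBACK semantics illustrated are the same.
conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions explicitly
cur = conn.cursor()
cur.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, value INTEGER)")
cur.execute("INSERT INTO counters VALUES ('hits', 10)")

# Transaction that is rolled back: the update never becomes visible
cur.execute("BEGIN")
cur.execute("UPDATE counters SET value = value + 5 WHERE name = 'hits'")
cur.execute("ROLLBACK")
print(cur.execute("SELECT value FROM counters").fetchone()[0])  # 10

# Transaction that is committed: the update persists
cur.execute("BEGIN")
cur.execute("UPDATE counters SET value = value + 5 WHERE name = 'hits'")
cur.execute("COMMIT")
print(cur.execute("SELECT value FROM counters").fetchone()[0])  # 15
```

In PostgreSQL the same pattern applies to whole scripts run through psql: wrap the commands in BEGIN/COMMIT and a failure anywhere aborts the lot.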
10.2. Functions
PostgreSQL functions, also known as stored procedures, allow you to carry out operations that would normally take several queries and round trips in a single function within the database. Functions also encourage reuse: other applications can interact directly with your stored procedures instead of going through a middle tier or duplicating code.
There are a large number of built-in functions in PostgreSQL and PostGIS. The PostGIS functions are typically installed in the public schema.
Functions can be created in the language of your choice, such as SQL, PL/pgSQL, C, or Python.
N.B. PL/Python is only available as an "untrusted" language, meaning it does not offer any way of restricting what users can do in it and is therefore named plpythonu.
The most common language for creating functions is PL/pgSQL.
Here is the basic syntax of a function:
CREATE [OR REPLACE] FUNCTION function_name (arguments)
RETURNS return_datatype AS $block_name$
DECLARE
    declaration;
    [...]
BEGIN
    <function_body>
    [...]
    RETURN { variable_name | value };
END;
$block_name$ LANGUAGE plpgsql;
Here is a very simple example:
CREATE OR REPLACE FUNCTION general.myfunction(a integer, b integer)
RETURNS integer AS $$
BEGIN
    RETURN a * b;
END;
$$ LANGUAGE plpgsql;
Although function names do not need to be schema qualified, it is recommended that they are placed in a schema. If no schema is specified they will be created in the public schema.
To run the above function, simply call it in a SELECT statement, e.g.:
select general.myfunction(4,3);
For more details on functions see https://www.postgresql.org/docs/9.4/static/sql-createfunction.html
For more details on PL/pgSQL see https://www.postgresql.org/docs/9.4/static/plpgsql.html
10.3. Python Functions
You can write functions in Python (and many other languages). First create the plpython3u extension using CREATE EXTENSION IF NOT EXISTS plpython3u;. Then you can create a function using code like:
CREATE OR REPLACE FUNCTION fngetxlspts(
    param_filename text,
    OUT place text,
    OUT lon float,
    OUT lat float
)
RETURNS SETOF RECORD AS $$
import xlrd
book = xlrd.open_workbook(param_filename)
sh = book.sheet_by_index(0)
for rx in range(1, sh.nrows):
    yield (
        sh.cell_value(rowx=rx, colx=0),
        sh.cell_value(rowx=rx, colx=1),
        sh.cell_value(rowx=rx, colx=2)
    )
$$ LANGUAGE plpython3u VOLATILE;
The lines above the first $$ define the function and its input and output parameters; the code between the two $$ marks is a string containing the Python code.
First we import a module (xlrd) that handles reading .xls files from Excel. Then we open the workbook contained in the file whose name was passed in, and find the first sheet in the workbook. Finally, we loop through the rows (after skipping row 0, the header) and yield the values of the first three cells in the current row. Using yield rather than return means our function can keep track of which row it is on, returning a set of rows rather than a single value.
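The effect of yield can be seen in plain Python, outside the database. Here is a minimal sketch with invented sample data (the place names and coordinates are hypothetical) that mimics how the set-returning function above skips the header row and emits one tuple per spreadsheet row:

```python
# A plain-Python analogue of the set-returning function above: yield produces
# one row per loop iteration, while return would end the function immediately.
rows = [
    ("place", "lon", "lat"),         # header row, skipped like row 0 above
    ("Southampton", -1.40, 50.90),   # hypothetical sample data
    ("Edinburgh", -3.19, 55.95),
]

def get_pts(data):
    for r in data[1:]:               # skip the header row
        yield (r[0], r[1], r[2])

result = list(get_pts(rows))
print(result[0][0])  # Southampton
```

Calling the generator returns an iterator; the caller (here, list()) pulls rows one at a time, just as PostgreSQL consumes the rows of a SETOF RECORD function.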
The PL/Python extension also provides a Python module, plpy, that gives access to the database. This allows you to query the catalog tables and run UPDATEs on tables based on the answers.
CREATE OR REPLACE FUNCTION add_wgs84(
    param_schema text,
    OUT result boolean
)
RETURNS boolean AS $BODY$
# get tables
plpy.info("Param is " + plpy.quote_literal(param_schema))
prep = plpy.prepare(
    "select f_table_name as tablename, f_geometry_column as geom_col, type as gtype "
    "from public.geometry_columns where f_table_schema = $1", ["text"])
rv = prep.execute([param_schema])
plpy.info("got " + str(len(rv)) + " rows")
for r in range(0, len(rv)):
    plpy.execute('alter table ' + param_schema + '.' + rv[r]['tablename'] +
                 ' add column geom_wgs84 GEOMETRY(' + rv[r]['gtype'] + ', 4326)')
    plpy.execute('update ' + param_schema + '.' + rv[r]['tablename'] +
                 ' set geom_wgs84 = st_transform(' + rv[r]['geom_col'] + ', 4326)')
return True
$BODY$ LANGUAGE plpython3u VOLATILE;
10.4. Triggers
A trigger is a set of actions that are run automatically when a specified change operation (SQL INSERT, UPDATE, DELETE or TRUNCATE statement) is performed on a specified table. Triggers are useful for tasks such as enforcing business rules, validating input data, and keeping an audit trail.
10.4.1. Create trigger
A trigger is a named database object that is associated with a table, and it activates when a particular event (e.g. an insert, update or delete) occurs for the table or view. The statement CREATE TRIGGER creates a new trigger in PostgreSQL. Here is the syntax:
10.4.1.1. Syntax
CREATE [ CONSTRAINT ] TRIGGER name { BEFORE | AFTER | INSTEAD OF } { event [ OR ... ] }
    ON table_name
    [ FROM referenced_table_name ]
    [ NOT DEFERRABLE | [ DEFERRABLE ] { INITIALLY IMMEDIATE | INITIALLY DEFERRED } ]
    [ FOR [ EACH ] { ROW | STATEMENT } ]
    [ WHEN ( condition ) ]
    EXECUTE PROCEDURE function_name ( arguments )
10.4.1.2. Parameters
Name | Description |
---|---|
name | The name of the trigger. The name must be distinct from the name of any other trigger for the same table. The name cannot be schema-qualified: the trigger inherits the schema of its table. |
BEFORE / AFTER / INSTEAD OF | Determines whether the function is called before, after, or instead of the event. A constraint trigger can only be specified as AFTER. |
event | One of INSERT, UPDATE, DELETE, or TRUNCATE; this is the event that will fire the trigger. |
table_name | The name of the table or view the trigger is for. |
referenced_table_name | The (possibly schema-qualified) name of another table referenced by the constraint. This option is used for foreign-key constraints and is not recommended for general use. It can only be specified for constraint triggers. |
DEFERRABLE / NOT DEFERRABLE | The default timing of the trigger. |
FOR EACH ROW / FOR EACH STATEMENT | Specifies whether the trigger procedure should be fired once for every row affected by the trigger event, or just once per SQL statement. If neither is specified, FOR EACH STATEMENT is the default. |
condition | A Boolean expression that determines whether the trigger function will actually be executed. |
function_name | A user-supplied function that is declared as taking no arguments and returning type trigger, which is executed when the trigger fires. |
arguments | An optional comma-separated list of arguments to be provided to the function when the trigger is executed. The arguments are literal string constants. |
Triggers that are specified to fire INSTEAD OF the trigger event must be marked FOR EACH ROW, and can only be defined on views. BEFORE and AFTER triggers on a view must be marked as FOR EACH STATEMENT. In addition, triggers may be defined to fire for TRUNCATE, though only FOR EACH STATEMENT. The following table summarizes which types of triggers may be used on tables and views:
When | Event | Row-level | Statement-level |
---|---|---|---|
BEFORE | INSERT/UPDATE/DELETE | Tables | Tables and views |
BEFORE | TRUNCATE | - | Tables |
AFTER | INSERT/UPDATE/DELETE | Tables | Tables and views |
AFTER | TRUNCATE | - | Tables |
INSTEAD OF | INSERT/UPDATE/DELETE | Views | - |
INSTEAD OF | TRUNCATE | - | - |
Here is a simple example of a trigger function:
CREATE TABLE test_table(
    col1 text,
    col2 text,
    col3 date
);

CREATE OR REPLACE FUNCTION test_table_trig() RETURNS TRIGGER AS $$
BEGIN
    NEW.col3 = current_date;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
Now we can create the trigger itself, which will fire the function when the specified event occurs on the associated table.
CREATE TRIGGER test_table_trigger
BEFORE INSERT ON test_table
FOR EACH ROW EXECUTE PROCEDURE test_table_trig();
In the above trigger function there is a new keyword, NEW, which is a PostgreSQL extension to triggers. There are two such keywords, OLD and NEW, and they are not case sensitive.
Within the trigger body, the OLD and NEW keywords enable you to access columns in the rows affected by a trigger:
In an INSERT trigger, only NEW.col_name can be used.
In an UPDATE trigger, you can use OLD.col_name to refer to the columns of a row before it is updated and NEW.col_name to refer to the columns of the row after it is updated.
In a DELETE trigger, only OLD.col_name can be used; there is no new row.
A column referenced with OLD is read only: you can refer to it, but not modify it. In a BEFORE trigger you can change the value of a NEW column with an assignment such as NEW.col_name := value, which means you can use a trigger to modify the values to be inserted into a new row or used to update a row. (Such an assignment has no effect in an AFTER trigger because the row change will have already occurred.)
Here is another example of a trigger, which writes to an audit table.
-- table of employees
CREATE TABLE employees(
    id serial primary key,
    first_name varchar(40) NOT NULL,
    last_name varchar(40) NOT NULL
);

-- audit table for last name changes
CREATE TABLE employees_audits(
    id serial primary key,
    employee_id integer,
    last_name varchar(40) NOT NULL,
    changed_on timestamp
);

-- trigger function to log last name changes in the audit table
CREATE OR REPLACE FUNCTION log_last_name_changes() RETURNS trigger AS $BODY$
BEGIN
    IF NEW.last_name <> OLD.last_name THEN
        INSERT INTO employees_audits(employee_id, last_name, changed_on)
        VALUES(OLD.id, OLD.last_name, now());
    END IF;
    RETURN NEW;
END;
$BODY$ LANGUAGE plpgsql;

-- trigger
CREATE TRIGGER last_name_changes
AFTER UPDATE ON employees
FOR EACH ROW EXECUTE PROCEDURE log_last_name_changes();
For more information on triggers and trigger functions see https://www.postgresql.org/docs/9.4/static/plpgsql-trigger.html
10.5. Python
To use PostGIS from a Python application you need the Psycopg adapter so you can access PostgreSQL from Python.
10.5.1. Psycopg
Psycopg is the most popular PostgreSQL database adapter for the Python programming language. Its main features are the complete implementation of the Python DB API 2.0 specification and thread safety (several threads can share the same connection). It was designed for heavily multi-threaded applications that create and destroy lots of cursors and make a large number of concurrent INSERTs or UPDATEs.
Psycopg 2 is mostly implemented in C as a libpq wrapper, making it both efficient and secure. It features client-side and server-side cursors, asynchronous communication and notifications, and COPY TO/COPY FROM support. Many Python types are supported out of the box and adapted to matching PostgreSQL data types; adaptation can be extended and customized thanks to a flexible object adaptation system.
Psycopg 2 is both Unicode and Python 3 friendly.
On Windows machines use the following to install psycopg2.
pip install psycopg2-binary
Here is a very simple example of using Psycopg2:
import psycopg2

conn = psycopg2.connect("dbname='pg_training' user='pgis' host='localhost' password='pgis'")
cur = conn.cursor()
cur.execute("SELECT datname FROM pg_database;")
rows = cur.fetchall()
for row in rows:
    print(" ", row[0])
#  template1
#  template0
#  postgres
#  aileen

cur.execute("SELECT general.myfunction(4,3);")
print(cur.fetchall())
10.5.2. What about geometry?
For a simple point you can easily use the appropriate PostGIS functions, e.g.:
>>> import psycopg2
>>> conn = psycopg2.connect('...')
>>> curs = conn.cursor()
>>> curs.execute("select st_astext(ST_SetSRID(ST_MakePoint(32, 34, 0), 4326));")
>>> curs.fetchall()
[('POINT Z (32 34 0)',)]
For more complicated geometries, such as LineString and Polygon geometries, there are a number of tools you can use, including:
Python GDAL/OGR for reading and writing geo-spatial data (https://pcjericks.github.io/py-gdalogr-cookbook/ )
Shapely for analysing and manipulating geo-spatial data (https://pypi.python.org/pypi/Shapely )
GeoDjango (https://docs.djangoproject.com/en/1.10/ref/contrib/gis/tutorial/ )
GeoAlchemy 2 (http://geoalchemy-2.readthedocs.io/en/latest/ )
10.5.2.1. Useful references:
Python Geospatial Development, Erik Westra
Geoprocessing with Python, Chris Garrard
10.6. Simple example using Shapely
Shapely is a library for manipulating and analysing geometry data. It is based on GEOS, the geometry library used by PostGIS. With Shapely you can compute buffers, unions, intersections, centroids, convex hulls, and more.
Shapely geometries can be passed to PostGIS through psycopg2 as hex-encoded WKB. Note that Shapely 1.3 or later is required to handle the export of 3D geometries with the wkb_hex property.
import psycopg2
from shapely.geometry import LineString
from shapely import wkb

conn = psycopg2.connect("dbname='pg_training' user='pgis' host='localhost' password='pgis'")
curs = conn.cursor()

# Make a Shapely geometry
ls = LineString([(2.2, 4.4, 10.2), (3.3, 5.5, 8.4)])
ls.wkt      # LINESTRING Z (2.2 4.4 10.2, 3.3 5.5 8.4)
ls.wkb_hex  # 0102000080020000009A999999999901409A999999999911406666666666662440666666...

# Send it to PostGIS
curs.execute('CREATE TEMP TABLE my_lines(geom geometry, name text)')
curs.execute(
    'INSERT INTO my_lines(geom, name) VALUES (ST_SetSRID(%(geom)s::geometry, %(srid)s), %(name)s)',
    {'geom': ls.wkb_hex, 'srid': 4326, 'name': 'First Line'})

# Fetch the data from PostGIS, reading hex-encoded WKB into a Shapely geometry
curs.execute('SELECT name, geom FROM my_lines')
for name, geom_wkb in curs:
    geom = wkb.loads(geom_wkb, hex=True)
    print('{0}: {1}'.format(name, geom.wkt))
# First Line: LINESTRING Z (2.2 4.4 10.2, 3.3 5.5 8.4)
Of course this could be accomplished by sending the geometry's WKT instead; however, since the coordinates are converted to text, that route is lossy and can reduce precision. Transferring geometries as hex-encoded WKB is lossless and preserves the exact precision of each coordinate.
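The difference can be illustrated in plain Python with the standard struct module (no PostGIS involved; the coordinate value is an arbitrary example). A coordinate printed as decimal text with a capped number of significant digits does not round-trip exactly, while its raw IEEE 754 bytes rendered as hex do:

```python
import struct

# An example coordinate whose shortest exact decimal form needs 17 digits
x = 0.1 + 0.2  # 0.30000000000000004

# WKT-style: serialize the coordinate as decimal text, capped at 15 digits
as_text = format(x, ".15g")
text_roundtrip = float(as_text)

# WKB-style: serialize the raw 8-byte IEEE 754 double as hex
as_hex = struct.pack("<d", x).hex()
hex_roundtrip = struct.unpack("<d", bytes.fromhex(as_hex))[0]

print(text_roundtrip == x)  # False - digits beyond the 15th were dropped
print(hex_roundtrip == x)   # True - every bit preserved
```

This is why the hex-encoded WKB route above is preferred when coordinates must survive the round trip bit-for-bit.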