### Linuxcbt PostgreSQL Edition ###
Features:
1.Object-relational Database Management System(ORDBMS)
a.Objects can be related in a hierarchy:Parent-Child
2.Transactional RDBMS:
Note:transactional statements must execute ALL or NONE
a.SQL statements have implicit BEGIN:COMMIT; statements
b.SQL statements may also have explicit BEGIN;COMMIT; statements
3.Note/Features:
a.Developed Originally @ UC Berkeley
4.One-process per connection - auto-spawns per new connection
a.managed by master process - 'postmaster'
5.Processes use only ONE CPU/Core
a.Note:OS/Distro may spawn new connection on a different CPU/core
6.Multiple helper processes,which appear as 'postgres' instances,always run
a.Stats collector
b.Background writer
c.Autovacuum - clean-up/space reclaimer
d.WALsender - write ahead log
7.Max DB Size Unlimited - Limited by OS & available storage
Note:Consider deploying on 64-bit platforms
8.Max Table Size:32TB - Stored as multiple 1GB files
9.Max Row Size:400GB
10.Max Column Size:1GB
11.Max Indexes on a table:Unlimited
12.Max Identifier(DB objects) (table|column names,etc.):63-bytes
Note:The limitation is extensible via source code
13.Default Listener:TCP:5432
a.You may install PostgreSQL as a non-privileged user
14.Users are distinct from OS users - i.e,as with MySQL
15.Users are shared across DBs
16.Inheritance (see the sketch at the end of this list)
a.Tables lower in hierarchy may inherit columns from higher tables
b.Caveat:No unique constraint or foreign key support
17.Case-insensitive identifiers - sans double quotes,i.e,'select * from Syslog;'
18.Case-sensitive identifiers - with double quotes,i.e,'select * from "Syslog";'
19.Three primary config files:$POSTGREROOT/data/*.conf
a.'pg_hba.conf' - controls host/user/DB connectivity
b.'postgresql.conf' - general settings
c.'pg_ident.conf' - user mappings
20.Integrated Log Rotation|Management - Log Collection - postgresql.conf
a.Criteria:Age | Size
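Note:A minimal SQL sketch of features 16-18 above (inheritance and identifier case folding);the table names 'cities' and 'capitals' are illustrative,not from the course:
    -- parent table
    CREATE TABLE cities (name text, population int);
    -- child table inherits all columns of 'cities' (unique/foreign key constraints are NOT inherited)
    CREATE TABLE capitals (country text) INHERITS (cities);
    -- unquoted identifiers fold to lower case: both statements reference the same table
    SELECT * FROM Cities;
    SELECT * FROM cities;
    -- double quotes preserve case: this would only work if a table had been created as "Cities"
    -- SELECT * FROM "Cities";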
### Installation ###
Features:
1.Download bin file from Enterprise DB
Tasks:
1.Install
Note:You may optionally indicate the data files be stored independently
of the source tree
Note:A 'Database Cluster' - is simply the management of more than 1 DB
Note:default is to start the RDBMS post-installation
2.Explore the footprint
a.'psql' - terminal monitor,i.e,akin to 'mysql'
b.'createdb|dropdb' - creates|drops DBs
c.'createuser|dropuser' - creates|drops users
d.'postgres' - server daemon
e.'data' - top-level config files/log files
f.'data/pg_log' - log files(default is:STDERR)
g.'data/pg_xlog' - write Ahead Log(WAL) maintains changes to DB files at all times
h.data/postmaster.opts - contains start-up options
i.'/etc/init.d/postgresql-9.3' - INITD manager
Note:Files,except the INITD file,are contained within the '/opt/PostgreSQL/version' hierarchy
3.Provide access to docs via Apache
a.'ln -s /opt/PostgreSQL/9.3/doc/postgresql/html ./Linuxcbt/postgredocs'
4.Update $HOME/.bashrc for user 'linuxcbt' to include /opt/PostgreSQL/9.3/bin - to access binaries
Note:PostgreSQL clients default to submitting the currently logged-in user's name as the DB user name
Note:A workaround for this is to export the PGUSER variable,setting it to an existing DB user
'export PGUSER=postgres'
### 'psql' ###
Features:
1.(Non)Interactive usage -i.e,'mysql' terminal monitor
2.Command history - Up|Down arrows
3.Tab completion
4.Commands terminate with semicolon and may wrap lines and have whitespace separators
5.Defaults to supplying the current logged-in user
Tasks:
1.Explore 'psql'
a.'psql --version'
b.'psql -l -U postgres' - list DBs and exit
Note:PostgreSQL installs 3 default DBs:
1.'postgres' - contains user accounts DB,etc
2.'template0' - vanilla,original DB
3.'template1' - copy of template0,and may be extended,and is used to generate new DBs
c.'psql' enter interactive mode
c1.trailing '#' indicates super user:'postgres'
d.'\h' - returns SQL-specific help
\h create
e.'\?' - returns 'psql'-specific help - i.e,usable metasequences
f.'\l[+]' - returns list DBs
\l+
g.'\du[+]' - returns list of users in system DB
h.'\!' - returns users to the shell,type 'exit' to return to psql
i.'\! command' - executes specific command - non-interactively
Note:Typical $SHELL semantics apply
j.'\i sql_file_name' - executes command(s) in the file - i.e,'psql' or SQL commands
Note:Multiple commands can be run from one file;terminate each with ';' and separate with whitespace (see the sketch at the end of this section)
psql -U postgres -f show_version.sql
k.'\c DBNAME [REMOTE_HOST]' - connects to different DB and optionally host
Note:current DB is echoed in the prompt,i.e,postgres=# || template1=#
l.\d[s][+] - reveals tables,views,sequence and various DB objects
m.'\q' - quits
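Note:A sketch of the '\i'/'-f' workflow from item j;the contents of 'show_version.sql' shown here are an assumption:
    -- show_version.sql: several statements,each terminated with ';'
    SELECT version();                        -- server version string
    SELECT current_database(), current_user; -- connection context
Run it interactively with '\i show_version.sql' or non-interactively with 'psql -U postgres -f show_version.sql'.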
### Access Controls ###
Features:
1.Users - Roles -Roles are users or groups
2.Config Files
a.'pg_hba.conf'|'pg_ident.conf'|'postgresql.conf'
3.Central accounts DB shared across ALL DBs - accounts MUST be unique
4.Default setup,includes:1-User - 'postgres' - Super user
5.Privileges are managed with:
a.GRANT | REVOKE
b.ALTER
c.CREATE | DROP USER|ROLE - SQL Statements(Key Words)
d.'createuser|dropuser' - commands - wrappers to SQL statements
6.DB Object creators own those objects and can assign privileges to them
7.To change DB object ownership use :ALTER - SQL key word
8.Special user named:'public' - grants assigned privileges to All system users
a.'PUBLIC' is a special group,to which ALL users are members
Tasks:
1.Create another super user
a.'\du' - enumerates current users|roles
b.'createuser -e -s -U postgres linuxcbt' - echoes SQL commands to STDOUT
2.Set 'linuxcbt' password '\password linuxcbt' - permits setting of user's password
Note:Caveat:Upon connection to postgres,the client 'psql' attempts to connect to a DB named $PGUSER
psql -U linuxcbt template1
3.Drop newly created super user:'linuxcbt'
a.'dropuser -e -U postgres linuxcbt' - removes user from DBMS
sql command:'drop role linuxcbt';
4.Examine remote,TCP-based connectivity
a.Edit /opt/PostgreSQL/9.3/data/pg_hba.conf to add remote access:
host all all 192.168.1.0/24 md5
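Note:A minimal SQL equivalent of the 'createuser'/'dropuser' tasks above,run as the 'postgres' super user;the password value is illustrative:
    -- create a super user role that may log in
    CREATE ROLE linuxcbt SUPERUSER LOGIN PASSWORD 'abc123';
    -- remove the role when finished
    DROP ROLE linuxcbt;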
### Logging ###
Features:
1.Three types of logs supported:
a.'stderr' (Default)
b.'csvlog' - import into spreadsheet | DBs
c.'syslog'
2.Controlled via:$POSTGREROOT/data/postgresql.conf
3.Simultaneous logging
4.Ability to control verbosity
5.Automatic log rotation based on criteria:age | size
6.Logs handled by the included logger(stderr,csvlog) are stored in $POSTGREROOT/data/pg_log
Note:Syslog-handled messages are routed according to syslog rules:/etc/syslog.conf
7.The log collector is the first process launched by the master process
Tasks:
1.Explore $POSTGREROOT/data/postgresql.conf
2.Configure syslog
a.Update syslog configuration for the 'LOCAL0' facility - consider updating logrotate
b.Update:'$POSTGREROOT/data/postgresql.conf' - 'stderr,syslog'
vim /etc/syslog.conf
invoke-rc.d rsyslog restart
3.Configure csvlog
a.append 'csvlog' to log_destination VAR:'$POSTGREROOT/data/postgresql.conf'
Note:Caveat:Syslog is UDP based and is subject to loss of messages
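Note:A hedged postgresql.conf sketch of the logging settings discussed above;the parameter names are standard,the values shown are assumptions:
    log_destination = 'stderr,csvlog,syslog'  # simultaneous logging to several destinations
    logging_collector = on                    # required to write stderr/csvlog files under data/pg_log
    syslog_facility = 'LOCAL0'                # must match the /etc/syslog.conf rule
    log_rotation_age = 1d                     # rotate by age ...
    log_rotation_size = 10MB                  # ... or by size,whichever is reached first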
### Common Data Types ###
Features:
1.Allow us to control the type of data on a per column basis
Types:
Numeric:
a.'smallint' - 16-bits (2-bytes) - whole numbers
unsigned = 0 - 2**16(65536)
signed: = -2**15(-32768) - 2**15-1(32767)
b.'int' - Integer 32-bits(4 bytes) - whole numbers
unsigned = 0 - 2**32(4294967296)
signed: = -2**31(-2147483648) - 2**31-1(2147483647)
c.'bigint' - Big Integer 64-bits (8 bytes) - whole numbers only
unsigned = 0-2**64(18446744073709551616)
signed:-2**63(-9223372036854775808) - 2**63-1(9223372036854775807)
d.'numeric[(precision,scale)]'
d1.precision = sig figs
d2.scale = number of values to the right of the decimal point
Note:'numeric' - sans precision or scale supports up to 1000 digits of precision
e.'real' - 32-bits - variable - 6 decimal digits of precision
f.'double precision' - 64-bits(8 bytes) - variable - 15 digits of precision
g.'serial' - 32-bits (2**31) - auto-incrementing
h.'bigserial' - 64-bits (2**63) - auto-incrementing
Money:
a.'money' - 64-bits - 2**63 signed i.e,-9EB - 9EB
can accept currency-formatted values,i.e,'$4.00'
Strings - Text
a.'text' - variable,unlimited length - preferred character storage type within PostgreSQL
b.'char(n)' - fixed-length,blank-padded if value stored is < 'n' length
b1.i.e,'char(9)' - 'linuxcbt' -> stored as 'linuxcbt '
Note:char(n) truncates values that are > 'n' length
b2.i.e.'char' -> 'char(1)' - effectively becomes a 1-character field
c.'varchar(n)' - variable length with 'n' limit,if 'n' is present,Does NOT blank-pad
c1.i.e,'varchar(10)' - 'linuxcbt' -> stored as:'linuxcbt'
c2.i.e,'varchar' -> variable length - Does NOT blank-pad
Note:use 'text' or 'varchar' when storing strings
Date & Time - uses Julian dates (from 4713 BC to 10**6+ years ahead) in date calculations
a.date - 32-bits - date only
b.time - 64-bits - defaults to time 'without' time zone - microsecond precision
c.'time with time zone' - 96-bits (12 bytes) - time of day with time zone - microsecond precision
d.'timestamp with time zone' - 64 bits -...
e.'timestamp without time zone' - 64 bits -...
f.'interval' - 96-bits - range of time - microsecond
Boolean - 8-bits - True(on) | False(off)
Geometric Types - lines,circles,polygons,etc.
Network Address Types:
a.'cidr' - 7 or 19 bytes - IPv4 and IPv6 networks,i.e,'192.168.75.0/24' || 'fe80::a00:27ff:feb6:193f/64'
b.'inet' - 7 or 19 bytes - IPv4 and IPv6 hosts and networks
c.'macaddr' - 48-bits,i.e, 08:00:27:b6:19:3f, 0800.27b6.193f(cisco router),etc
XML
Arrays
et cetera
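Note:A hedged CREATE TABLE sketch exercising several of the types above;the table and column names are illustrative only:
    CREATE TABLE sample_types (
      id        serial,            -- auto-incrementing 32-bit integer
      small_val smallint,          -- 2-byte whole number
      price     numeric(7,2),      -- 7 significant digits,2 to the right of the decimal point
      ratio     double precision,  -- 8-byte variable precision
      name      varchar(64),       -- variable length,max 64 characters,no blank padding
      note      text,              -- unlimited length - preferred string type
      created   timestamp,         -- date & time,microsecond precision
      enabled   boolean,           -- true | false
      client    inet               -- IPv4/IPv6 host or network
    );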
### create ###
Features:
1.Limited to 63 characters for the definition of objects
2.Identifiers (DB object) MUST begin with alpha characters
3.Used to create Schemas,DBs,Tables,Indexes,Functions,etc.
Note:PostgreSQL Hierarchy:
1.DB
2.Schema(s)(Optional) - default schema is named 'public'
3.Objects(Tables,Functions,Triggers,etc)
Note:All DBs have 'public' and 'pg_catalog' schemas
\dS
Note:ALL users|roles have:'CREATE' & 'USAGE' access to the 'public' schema for ALL DBs
Note:Create distinct schemas if security beyond 'public' is necessary
Tasks:
1.DB Creation
a.Create a user named 'linuxcbt' with 'CREATEROLE CREATEDB' rights
a1.'createuser -e -U postgres linuxcbt'
createuser -e -U postgres -r -s linuxcbt
CREATE ROLE jack4 NOSUPERUSER NOCREATEDB NOCREATEROLE INHERIT LOGIN;
createuser -e --interactive -W
CREATE ROLE jack6 NOSUPERUSER CREATEDB CREATEROLE INHERIT LOGIN;
b.Create a DB named 'linuxcbt'
b1.'CREATE DATABASE linuxcbt;'
c.Create a table named 'linuxcbtmessages'
c1.'create table linuxcbtmessages(date date);'
d.Crate a user named 'linuxcbt2' with USAGE rights
d1.'create role linuxcbt2 NOSUPERUSER LOGIN INHERIT;'
\d tablename
e.Create a schema named 'logs'
e1.'create schema logs;'
f.Create a table named 'linuxcbtmessages' within the 'logs' schema of DB 'linuxcbt'
f1.create table logs.linuxcbtmessages(date date);
f2.'\d linuxcbt.logs.linuxcbtmessages' - confirms the description of the table
g.Check what access user 'linuxcbt2' has to the new objects
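Note:The creation sequence above,restated as a minimal corrected SQL sketch (run as a role with CREATEDB rights;names follow the tasks above):
    CREATE DATABASE linuxcbt;
    -- \c linuxcbt                       -- connect to the new DB in 'psql'
    CREATE SCHEMA logs;                  -- otherwise objects land in the default 'public' schema
    CREATE TABLE logs.linuxcbtmessages (date date);
    -- \d logs.linuxcbtmessages          -- confirm the table definition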
### DROP ###
Features:
1.Removes objects:DBs,Schemas,Tables,Functions,Triggers,etc.from ORDBMS
2.available from the $SHELL and within sql interpreter:'psql'
Tasks:
1.Drop DB 'linuxcbt2'
Note:Objects that are currently in use will NOT be dropped by default
a.'dropdb linuxcbt'
Note:Dropping DBs will remove all sub-objects,including,but not limited to:
a.Schemas
b.Tables
c.Functions
d.Triggers,etc.
2.Drop Tables
a.DROP TABLE table_name - removes table if current user is owner or SUPERUSER
b.'DROP TABLE linuxcbtmessages' - as user:'linuxcbt' - fails due to lack of permissions
\h drop
c.'CREATE DATABASE test TEMPLATE linuxcbt;' - clones the DB named 'linuxcbt'
Note:No sessions may be connected to the template DB while the copy runs;this ensures consistency of the duplicated DB
Note:Objects created within the 'public' schema
d.Drop schema 'linuxcbt.logs'
d1.'drop schema logs;' - fails because there is a dependent table:'linuxcbtmessages'
d2.'drop schema logs cascade;' - forces recursive removal of objects
3.Re-create structure and drop
e1.'CREATE DATABASE LinuxCBT'
e2.'\c LinuxCBT && CREATE SCHEMA LinuxCBT.LOGS'
e3.'CREATE TABLE LinuxCBT.logs.linuxcbtmessages(date date)'
e4.'drop database linuxcbt'
### ALTER ###
Features:
1.Changes object [DB|Schema|Table|Index|etc.] - Name|Structure|Owner
Tasks:
1.Confirm our environment by ensuring requisite objects:
2.Change DB Name
a.'ALTER DATABASE linuxcbt RENAME TO linuxcbt2;'
Note:ALTER should be used without active connections to target objects
3.Change DB Ownership
a.'ALTER DATABASE linuxcbt OWNER TO linuxcbt2;' - changes ownership to role:'linuxcbt2'
4.Test ability to DROP DB as new owner
a.'DROP DATABASE linuxcbt;'
5.Create and rename table 'linuxcbt2message'
a.'create table linuxcbt2message(date date);' - create table 'public.linuxcbt2message'
b.'ALTER TABLE linuxcbt2message RENAME TO messages;'
6.Alter Table structure 'messages'
a.'ALTER TABLE messages ALTER COLUMN date SET DATA TYPE timestamp;'
Note:Structural(columnar) changes may result in data loss if the target column type does NOT support the source column data
b.'ALTER TABLE messages ADD ident text;' - adds a new column to the end of the table
Note:Column names MUST be unique and may not be added more than once
c.'ALTER TABLE messages DROP COLUMN IF EXISTS ident;' - removes column named:'ident' if exists
Note:Be forewarned,that the dropping of a column WILL remove existing DATA in the column
7.ALTER existing ROLE
a.'ALTER role linuxcbt SUPERUSER;' - make user linuxcbt a SUPERUSER
Note:This will ONLY work if you execute as a SUPERUSER
b.'ALTER ROLE linuxcbt RENAME to linuxcbt3' - renames user:'linuxcbt' to 'linuxcbt3'
Note:This will clear the user's MD5 password
Note:this will also update ownership of objects,i.e,DB::Test is now owned by 'linuxcbt3'
c.'ALTER ROLE linuxcbt3 RENAME to linuxcbt'
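Note:Relating to the data-loss caveat in task 6:when existing values cannot be cast implicitly,a 'USING' expression may be supplied;a sketch,assuming a text column 'ident' holding numeric strings:
    -- tell PostgreSQL how to transform each existing row during the type change
    ALTER TABLE messages
      ALTER COLUMN ident SET DATA TYPE integer
      USING ident::integer;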
### Constraints ###
Features:
1.Enforce storage requirements per table || column
2.May be applied per column
3.Multiple constraints may be bound to a single column
4.Optionally,constraints may be defined at the table for one or more columns
5.Default column rule is to accept NULLs
Data Types - basic constraint
a.Restricts permitted column values
i.e,'date','smallint','char(9)',etc.
NOT-NULL | NULL Constraints
a.Define a table using NOT NULL
a1.'create table messages (date date NOT NULL);'
b.alter table adding a new column (id) with constraint NULL
b1.'alter table messages add id int NULL'
Unique constraint - applies to any type of column:i.e,'int','numeric','text',etc
a.define a table with unique id column
a1.'create table messages (date date,id bigint unique)'
Note:The creation of UNIQUE constraints generates implicit btree indexes on column(s)
b.Define table with multiple unique columns
b1.'create table messages (date date,id bigint,message text,unique(id,message))'
Note:This ensures that the combination of 'id' && 'message' is unique
Sample Records that do not break the UNIQUE constraint:
2010-10-14 1 message
2010-10-14 2 message
2010-10-14 3 message
Sample Records that do break the UNIQUE constraint:
2010-10-14 1 message
2010-10-14 1 message
2010-10-14 1 message
Primary Key Constraint - combination of 'UNIQUE' & 'NOT NULL' constraints
a.create a table with primary key constraint on 1 column
a1.'create table message (date date,id numeric primary key)'
b.create a table with primary key constraint on 2 column
b1.'create table message (date date,id numeric ,message text,primary key(id,message))'
Note:Standard SQL recommends that each table contain a primary key
Foreign Key Constraint - Links Tables - Referential Integrity
a.create messages table as parent table
a1.'create table messages(date date,id int primary key);'
b.Create a subordinate table to categorize the messages in the parent table
b1.'CREATE TABLE messages_categories (id int REFERENCES messages(id),category text)'
Check Constraint - confirm column values based on Boolean criteria:
"CHECK (expr)"
a.Ensure that (id) contains values greater than 0
a1.'create table message(date date not null,id numeric check(id > 0));'
b.Create the same constraint with a name
Note:If unnamed,PostgreSQL will auto-name the constraint
b1.'create table message(date date not null,id numeric constraint positive_id check(id > 0));'
c.Create check constraint which summarizes ALL rules for ALL columns
c1.'create table message(date date,id numeric check(date is not null and id > 0 and id is not null));'
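Note:A consolidated sketch combining the constraint types above in one parent/child pair (run in a scratch DB;names follow the section's examples):
    CREATE TABLE messages (
      date date NOT NULL,                        -- NOT NULL constraint
      id   numeric PRIMARY KEY,                  -- PRIMARY KEY = UNIQUE + NOT NULL
      CONSTRAINT positive_id CHECK (id > 0)      -- named CHECK constraint
    );
    CREATE TABLE messages_categories (
      id       numeric REFERENCES messages(id),  -- FOREIGN KEY - referential integrity
      category text
    );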
### INSERT ###
Features:
1.Populates tables via various methods
2.Inserts values left-to-right
Usage:
1.Insert into table supplying values for all columns in order
a.'insert into messages values('2010-10-10','1')'
2.Insert using specified field(s)
a.'insert into messages(date) values('2010-10-10')'
3.Insert more columns than are defined in the table - will not work
a.'insert into messages values('2010-10-10',1,'log messages')'
4.Insert multiple records wholesale
a.'INSERT INTO messages VALUES('2014-03-01',1),('2014-03-02',2),('10/24/2010',3),('2014-03-03',3),('2014-03-05',5),('2014-03-06',6);'
Note:The date format in record 3 causes the entire transaction to fail,due to the implicit BEGIN & COMMIT statements (see the transaction sketch at the end of this section)
5.Test Foreign Key Constraint
a.'CREATE TABLE messages_categories (id int REFERENCES messages(id),category text)'
Note:Because messages table does not have UNIQUE or PRIMARY KEY constraints on (id) column,the Foreign key constraint will fail
Note:Rectify by defining a Primary Key on messages table OR insert unique values into (id)
Note:Postgresql raises error and denies creation of subordinate table
b.Insert data into dependent table
b1.'insert into messages_categories values('4','VSFTPD')' - fails constraint
b2.'insert into messages_categories values('3','VSFTPD')' - passes constraint
Note:Foreign key constraint need not be based on a numeric
6.Test Primary Key Constraint
a.'INSERT INTO messages values('10/24/2010',3);' - fails constraint
b.'INSERT INTO messages values('10/24/2010',4);' - passes constraint
Note:Summarizes both:UNIQUE and NOT NULL constraints
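Note:Relating back to the transactional feature (ALL or NONE) and the failed multi-row insert in usage 4:an explicit BEGIN/COMMIT sketch;the values are illustrative:
    BEGIN;                                        -- explicit transaction start
    INSERT INTO messages VALUES ('2014-03-01',10);
    INSERT INTO messages VALUES ('2014-03-02',11);
    -- any error before COMMIT aborts the whole transaction (ALL or NONE)
    COMMIT;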
### COPY ###
Features:
1.Server-side command,unlike '\copy' which is client-side
2.Wholesale Inserts(imports)|Export from | to a file
3.File MUST be on the server
4.File MUST be viewable by the 'postgres' user
5.Uses absolute $PATH to reference the file
6.Defaults to a Tab separator (delimiter) when importing
7.Able to copy the results of a SELECT query (see the sketch at the end of this section)
8.Does not work with VIEWS but will work with SELECT of VIEW
9.Appends records to the table
Tasks:
1.Generate an import file
a.for i in `seq 100`;do echo `date +%F` $i;done > linuxcbt/messages.data;
2.import data
a.'copy messages from /root/messages.data delimiter " ";'
Note:truncate table when necessary to clear data
'TRUNCATE TABLE messages CASCADE;' - removes all rows (and dependent-table rows)
copy messages from '/tmp/messages.data' delimiter ' '; - use single quotes to specify the source data file
b.Vary delimiter
b1.awk '{print $1","$2}' messages.data > messages.data.csv - format output with comma delimiter
b2.truncate messages cascade
b3."copy messages from '/tmp/messages.data.csv' delimiter ','";
3.Export Data
a.'COPY messages to 'filename' DELIMITER ';';' - export with semicolon delimiter
Note:Ensure that user:'postgres' may write to target directory
Note:Export does NOT append;rather,it clobbers (overwrites) the target file
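Note:A sketch of feature 7 (copying the results of a query) using CSV output;the path is illustrative and must be writable by the 'postgres' user:
    COPY (SELECT date,id FROM messages WHERE id <= 10)
      TO '/tmp/messages_subset.csv'
      WITH (FORMAT csv, HEADER);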
### SELECT ###
Features:
1.Performs queries:i.e,calculation,system stats,data retrieval
2.Retrieves data from objects:table(s),view(s),etc
Usage:
1.'SELECT * FROM messages;' - returns ALL records
2.'SELECT rolname,rolcreaterole FROM pg_roles;' - returns just those 2 columns
3.'SELECT rolcreaterole,rolname FROM pg_roles;' - reversed
4.'SELECT rolname as r,rolcreaterole as rr from pg_roles' - constructs alias for columns
5.'SELECT * from pg_roles WHERE rolname like '%linuxcbt%'' - Simple string comparison
6.'SELECT * FROM messages ORDER BY id desc' - Changes sort order on (id) column
7.'SELECT distinct date FROM messages' - filters unique values per column,by not returning redundancies
8.LIMITS & OFFSETS
Features:
Ability to extract a subset of records using SELECT
Note:Use 'ORDER BY' clause when using 'LIMIT' to influence sort order because SQL does not guarantee sort order
a.'SELECT * FROM MESSAGES ORDER BY id ASC LIMIT 10;' returns first 10 records;
b.'SELECT * FROM messages ORDER BY id DESC LIMIT 10;' returns last 10 records;
c.'SELECT * FROM messages ORDER BY id LIMIT 10 OFFSET 10;' returns records 11 - 20;
d.'SELECT * FROM messages ORDER BY id LIMIT 10 OFFSET 9;' returns records 10 - 19;
e.'SELECT * FROM messages ORDER BY id LIMIT 11 OFFSET 9;' returns records 10 - 20;
f.'SELECT * FROM messages ORDER BY id desc LIMIT 10 OFFSET 10;' returns records 90 - 81;
### JOIN ###
Features:
1.aggregates related data across tables:2 or more
2.Default CROSS JOIN - i.e,'select * from messages,messages_categories'
Note:'CROSS JOIN' produces 'N*M' rows of data
Tasks:
1.Populates the 'messages_categories' dependency (lookup) table
a.'INSERT INTO messages_categories VALUES(1,'VSFTPD'),(2,'SSHD'),(3,'XINETD')'
2.Standard join using 'WHERE' clause:
a.'SELECT * FROM messages AS m,messages_categories AS mc where m.id = mc.id;' - INNER JOIN using 'WHERE' clause
b.Create JOIN with a THIRD table
b1.'CREATE TABLE messages_alerts (id int NOT NULL,alert text NOT NULL)';
b2.'INSERT INTO messages_alerts VALUES(1,'DEBUG'),(2,'INFORMATIONAL'),(3,'WARNING');'
b3.'SELECT * FROM messages AS m,messages_categories AS mc,messages_alerts as ma where m.id = mc.id and m.id = ma.id;'
b4.'SELECT m.id,date,category,alert FROM messages AS m,messages_categories AS mc,messages_alerts as ma where m.id = mc.id and m.id = ma.id;' - returns one (id) column in result
3.INNER JOINs
a.'SELECT * FROM messages as m INNER JOIN messages_categories as mc ON m.id = mc.id' - Functionally equivalent to JOIN with 'WHERE' clause
b.'SELECT * FROM messages as m INNER JOIN messages_categories as mc using (id)' - Same as above but suppresses duplicate (id) column
c.'SELECT m.id,m.date,category FROM messages as m INNER JOIN messages_categories as mc using (id);'
4.LEFT JOINs
Features:Returns all rows from the left table plus only the matching rows from the right table
a.'SELECT * FROM messages as m LEFT JOIN messages_categories as mc on m.id = mc.id'
b.'SELECT * FROM messages as m LEFT JOIN messages_categories as mc USING(ID)'
c.'SELECT m.id,m.date,mc.category FROM messages as m LEFT JOIN messages_categories as mc USING(ID)' - Same as above but suppresses duplicate (id) column
5.RIGHT JOIN
Features:Returns all rows from the right table plus only the matching rows from the left table
a.'SELECT * FROM messages as m RIGHT JOIN messages_categories as mc on m.id = mc.id'
b.'SELECT * FROM messages as m RIGHT JOIN messages_categories as mc USING(ID)'
c.insert a new category into table messages_categories
c1.'INSERT INTO messages_categories VALUES(101,'UNKNOWN')';
Note:Foreign Key Constraint prohibits creation of values in 'messages_categories' that DO NOT exist in table:'messages'
d.'SELECT m.id,m.date,mc.category FROM messages as m RIGHT JOIN messages_categories as mc USING(ID)' - Same as above but suppresses duplicate (id) column
e.'SELECT m.id,m.date,mc.category FROM messages as m INNER JOIN messages_categories as mc USING(ID) ORDER BY category;' - Same as above but uses INNER JOIN and orders by category ASC
### VIEWS ###
Features:
1.Presents consolidated query driven interfaces to data
2.They may be based on 1 or more tables
3.Not real objects;rather,the query is executed upon invocation
4.Supports temporary VIEWs - last for session duration
5.Column names are auto-derived from the query
Tasks:
1.Define view based on inner join of messages and messages_categories
a.'CREATE VIEW messagesandcategories as SELECT * FROM messages INNER JOIN messages_categories USING(id);' - creates permanent view of inner joined tables
b.'SELECT * FROM messagesandcategories' - execute the VIEW
2.Insert records into both messages & messages_categories & re-query view
a.'INSERT INTO messages_categories VALUES(4,'KERNEL');'
3.Use Aliases
a.'SELECT id AS I,date AS d,category AS c FROM messagesandcategories ORDER BY i;';
4.Update VIEW
a.'CREATE OR REPLACE VIEW messagesandcategories as SELECT id AS I,date AS d,category AS c FROM messages INNER JOIN messages_categories USING(id);' Creates or updates VIEW
a1.drop view messagesandcategories;
b.'CREATE OR REPLACE VIEW messagesandcategories(i,d,c) as SELECT id,date,category FROM messages INNER JOIN messages_categories USING(id);' Creates or updates VIEW,naming the columns in the view definition instead of using aliases
5.Create TEMP VIEW
a.'CREATE TEMP VIEW messagesandalerts (i,d,a) AS SELECT messages.id,date,alert from messages INNER JOIN messages_alerts using(id);'
\dS messages*
\dS+ messages*
Note:TEMP VIEWS are not assigned to the default:'public' schema
Note:TEMP VIEWs are not visible to other sessions
6.Create TEMP VIEW on a single table
a.'CREATE TEMP VIEW messagesdates AS SELECT date from messages;'
### Aggregates ###
Features:
1.Compute single results(scalars) from multiple input rows
2.Values are computed after 'WHERE' has selected rows to analyze
a.Consequently,Aggregates may not be used within:'WHERE' clause
b.However,aggregates Can be used with: 'HAVING' clause
3.The Having clause is calculated post-aggregate computation(s)
Examples:
1.'SELECT count(*) FROM messages;' - counts rows
a.'SELECT count(date) FROM messages;' - counts rows as well
2.'SELECT sum(id) from messages' - Adds values from each row
3.'SELECT avg(id) from messages' - Averages values across ALL rows
4.'SELECT min(id) from messages' - Finds min values across ALL rows
5.'SELECT max(id) from messages' - Finds max values across ALL rows
Note:'MIN' and 'MAX' works with both numeric and date types
6.'SELECT min(id),max(id),count(id) from messages;' - Queries multiple aggregates simultaneously
Examples with WHERE ,GROUP & HAVING
7.'SELECT date,min(id) from messages GROUP BY date' - groups min(id) by 'date'
Note:When referencing non-aggregate and aggregate columns in the same query,use the 'GROUP BY' clause to group aggregate data by the non-aggregate columns
8.'SELECT date,min(id) from messages WHERE id < 51 GROUP BY date' -- restricts aggregate 'min(id)' to rows containing (id) < 51
9.'SELECT date,min(id) from messages WHERE id < 51 GROUP BY date HAVING min(id) < 30 ' - Post aggregate,restricts returned results to (id) < 30;
10.'SELECT date,min(id),max(id) from messages WHERE id < 51 and id > 40 GROUP BY date' - Extracts rows where 40 < (id) < 51
Boolean aggregate
1.'ALTER TABLE messages add enabled boolean NOT NULL DEFAULT false;' -Extends table to include a boolean column 'enabled'
2.'SELECT bool_and(enabled) FROM messages;' - returns true if ALL are true;
3.'SELECT bool_or(enabled) FROM messages;' - returns true if 1 or more are true;
String Aggregates
1.'ALTER TABLE messages ADD message text NOT NULL DEFAULT 'syslog message''
2.'SELECT string_agg(message,' ') FROM messages;' - concatenates string(text) values with single space delimiter
### UPDATE ###
Features:
1.Update table(s) based on criteri(on|a)
2.Requires:name of table,column(s) to update,criteri(on|a) (WHERE) clause
3.Updates table and sub-tables unless 'ONLY' keyword is used
4.Output indicates number of rows updated
5.WILL UPDATE ALL RECORDS if missing criteri(on|a)
Examples:
1.'UPDATE messages SET enabled='f' WHERE id = 100' - updates 1 record to false
2.'UPDATE messages SET enabled='t' WHERE enabled = 'f'' - updates many records to true
3.'UPDATE messages SET enabled='f' WHERE id >= 50' - updates records with id >= 50 to false
4.'UPDATE messages SET enabled=1 WHERE id >= 50' - updates records with id >= 50 to true
5.'UPDATE messages SET enabled=0,message='new message' WHERE id = 100' - updates multiple columns where id = 100
6.'UPDATE messages SET enabled = DEFAULT;' - resets columns 'enabled' to default value for ALL rows
7.'SELECT * from messages where message <> 'syslog message'; - checks for rows where column 'message' is not 'syslog message'
8.'UPDATE messages set message = DEFAULT where id = 100;'
9.'UPDATE messages set message = DEFAULT where id = 100 RETURNING *;' - returns ALL columns
Note:RETURNING - is postgresql specific
Note:It is equivalent to running a post-UPDATE SELECT query
10.'UPDATE messages set id = id+1 where id = '102'
11.'UPDATE messages set id = id+1 where id = '100' RETURNING *;' - ERROR because of duplicate
12.'UPDATE messages set id = id+1 where id = '101' RETURNING *;' - ERROR because of foreign key constraint
13.'UPDATE messages set date = '2014-04-15' RETURNING *;'
14.'UPDATE messages set date = 'now' RETURNING *;'
### DELETE ###
Features:
1.Removes entire records based on criteri(on|a)
2.Does NOT remove individual columns
3.Requires name of table,and preferably criteri(a|on) (WHERE) clause
4.DELETE deletes recursively;use 'ONLY' to avoid deleting from child tables
5.Returns numbers of (count) records deleted
Examples:
1.'DELETE FROM messages WHERE id = 103;' - removes a single record IF EXISTS
2.'DELETE FROM messages WHERE date = '2010-10-18' and enabled = 't' and id > 50;' removes records with (id) > 50;
Note:Fails because of foreign key constraint
3.'DELETE FROM messages WHERE date = '2010-10-18' and enabled = 't' and id > 50 and id < 101;' removes records with (id) > 50 and (id) < 101;
a.'SELECT count(*),min(id),max(id) FROM messages;'
4.'DELETE FROM messages WHERE id > 50 and id < 101 RETURNING *;' removes records with (id) > 50 and (id) < 101;
5.'DELETE FROM messages WHERE enabled = '1' RETURNING *;' - Boolean
Note:Foreign key constraint prohibits the entire transaction
6.'DELETE FROM messages WHERE enabled = '1' AND id >= 30 AND id != 100 RETURNING *;'
7.'DELETE FROM messages;' - deletes ALL rows and rows of sub-tables recursively
a.'ALTER TABLE messages_categories DROP CONSTRAINT IF EXISTS messagescategories_id_fkey;' - removes the foreign key constraint from the dependent table
8.Re-constitute 'messages' table to include auto-generating 'SERIAL' type on (id) column and re-populate with data
a.'ALTER TABLE messages DROP id CASCADE;' - drops column with cascade
b.'ALTER TABLE messages ADD id serial;' - creates auto-sequence generator
c.'for i in `seq 10000`;do echo `date +%F`;done > 10k.txt' - generate 10k records
d.'COPY messages (date) from $PATH_TO/10k.txt'
e.'SELECT count(*) FROM messages;'
### INDEXES ###
Features:
1.Speed data retrieval (SELECT) and the row lookups used by writes (UPDATE,DELETE)
2.Indexes reference data locations explicitly for indexed columns,consequently reducing data retrieval time
3.Without indices,SQL performs sequential table scans in search of data
4.Create on columns,that are frequently queried and | or JOINed
5.Caveat:During creation,ONLY reads are permitted to table being indexed
6.Max 32 columns per index - multicolumn
7.PostgreSQL auto-maintains indexes
Tasks:
1.'EXPLAIN select * from messages'- explains (does not execute) plan to execute query
2.'EXPLAIN select * from messages where id = 4000'
3.Drop & Recreate messages table
a.'DROP TABLE messages'
b.'CREATE TABLE messages(date date,id SERIAL);'
c.'ALTER TABLE messages ADD PRIMARY KEY (id);' - generates btree index on column(id)
4.Create an index on a column
a.'ALTER TABLE messages ADD messageid numeric NOT NULL DEFAULT 0;'
b.'CREATE INDEX messages_id ON messages(messageid);'
c.'explain ANALYZE select messageid,date from messages where messageid = 5990' - ANALYZE causes the query to execute,suppressing the output and returning useful statistics
5.Enumerate Indexes' info
a.'\di[S+]' - enumerates ALL indices within public schema
6.Drop Index
a.'DROP INDEX indexname'
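Note:A sketch of a multicolumn index (feature 6) plus the EXPLAIN check used above;the column choice is illustrative:
    -- index two columns that are frequently queried together (up to 32 columns per index)
    CREATE INDEX messages_date_messageid ON messages (date,messageid);
    -- confirm whether the planner uses it
    EXPLAIN ANALYZE SELECT * FROM messages WHERE date = '2014-04-15' AND messageid = 5990;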
### Built-in functions ###
Features:
1.Manipulate data in a variety of ways
Tasks:
1.Cover Math Functions
a.'select abs(-5)'
b.'select sqrt(25)' || SELECT sqrt(id) FROM messages
c.'select cbrt(125)'
d.select ceil(95.4) - return next highest integer
e.select floor(95.4) - return next lowest integer
f.select div(15,5) - performs integer division - returns the integer quotient
g.'select log(1000)'
h.'select power(4,3)' - raises 4 to the 3rd power
i.'select random()' - returns random value between 0 and 1 - 10^-15
j.'select round(22.5)' - rounds down or up,same as 'round()' in mysql
k.'select trunc(95.456,1)' - useful for normalizing floating point - same as 'truncate()' in mysql
l.'select cos(0);' - returns 1,other trig functions are available
2.Cover useful String Functions
a.'select bit_length('test bunny');' - returns 80 bits,same as mysql 'bit_length'
a1.alter table messages add message text not null default 'syslog message';
b.'select bit_length(message) from messages;' - 112-bits
c.'select char_length('test bunny')' - 10 chars
d.'select char_length(message),message from messages' - 14 chars
e.'select lower('Test Bunny');' - normalizes output to be lower case
f.'select lower(message) from messages;' - normalizes output to be lower case from column
g.'select upper(message) from messages;' - normalizes output to be upper case from column
h.'select initcap(message) from messages;' - Apply CAPS to first letter of each word
i.'select overlay('test' placing 'xx' from 2);' - string replacement
j.'select message,overlay(message placing 'xx' from 2) from messages;' - applies to query
k.'select trim(' ' from ' linuxcbt ');' - trims leading & trailing spaces,not between
l.'select substring('syslog' from 4)'
m.'select message,substring(message from 4) from messages;' - applies to table
n.'select split_part('syslog message',' ',2);' - returns 2nd string using space delimiter
o.'select initcap(split_part(message,' ',2)) from messages where id < 10;' nested functions
### Model:/var/log/messages ###
Features:
1.Challenge of replicating a flat-file structure
Tasks:
1.Examine and model:/var/log/messages
a.Need:'mID,mCatID,mTime(timestamp) transformation,mHost,mFacility,mMessage'
2.Generate 'CREATE' statement
a.'CREATE TABLE messages(
mid BIGSERIAL PRIMARY KEY,
mcatid smallint NOT NULL DEFAULT 1,
mtime timestamp NOT NULL DEFAULT now(),
mhost TEXT NOT NULL DEFAULT 'UNKNOWN HOST',
mfacility TEXT NULL DEFAULT NULL,
mmessage text NOT NULL DEFAULT 'NO MESSAGE'
);'
3.Test 'INSERT' statements for sample record
a.'INSERT INTO messages(mtime,mhost,mfacility,mmessage) values('Oct 17 07:53:42 2010','linuxcbtt1','Kernel;','[061929.262518] device eth0 left promiscuous mode');'
4.Write a Perl script to parse /var/log/messages and transform the data to suit PostgreSQL
a.Also include logic to strip the trailing ':' from the facility name,i.e,'Kernel:' should become 'Kernel'
// TODO
### Integration of Perl with PostgreSQL ###
Requires:
1.'libpg-perl' - PostgreSQL module for Perl
Features:
1.DBMS connectivity to Perl application/scripts
Tasks:
1.Install 'libpg-perl'
a.'aptitude install libpg-perl'
a1.'aptitude install perl-doc'
a2.'perldoc Pg' for help
// TODO
### GRANT ###
Features:
1.Assigns Privileges
a.SELECT - columns of tables
b.INSERT - columns of tables
c.UPDATE
d.DELETE - row-based
e.CREATE
f.CONNECT
g.EXECUTE
h.TRIGGER
i.USAGE
j.TEMPORARY
k.TRUNCATE
l.REFERENCES
2.Objects are owned by creators:owners/super-users
a.non-super users have NO access to them
3.Use:'\dp' to reveal GRANTS
Tasks:
1.Create new user 'linuxcbt3' and try to SELECT data from existing tables
'CREATE ROLE linuxcbt3 LOGIN password 'abc123''
Note:UPDATE & DELETE privileges require SELECT for criteri[a|on] applications
2.Attempt to query tables owned by other users
a.SELECT * FROM messages LIMIT 10; - Fails
3.Remedy scenario to allow user:'linuxcbt3' to SELECT data from 'messages' table
a.'GRANT SELECT(mid,mcatid) ON messages TO linuxcbt3;' - Column-level privileges
b.'GRANT SELECT ON messages TO linuxcbt3;' - Table-level privileges - supersedes column restrictions
4.Attempts to INSERT new record into messages table
a.'INSERT INTO messages (mtime,mhost,mfacility,mmessage) values('Oct 21 10:48:46 2010','LinuxcbtIdl','test','TESTING INSERT PRIVILEGES');'
date "+%b %d %T %Y"
b.'GRANT INSERT ON messages TO linuxcbt3;' - grants INSERT on ALL columns
c.'GRANT USAGE ON messages_mid_seq to linuxcbt3' - grants USAGE on sequences
Note:If using sequences,grant USAGE on sequences to user
5.Attempt to UPDATE current records on:'messages' - table
a.'UPDATE messages SET mfacility = 'test2' where mid='1'';
b.'GRANT UPDATE ON messages_mid_seq to linuxcbt3' - grants UPDATE on sequences
Note:UPDATE privileges allows user to update ANY records in the table
6.Attempt to DELETE current records from 'messages' table
a.'DELETE FROM messages where mid='1'';
7.Grant ALL privileges on 'messages' table
a.'GRANT ALL ON messages TO linuxcbt3'
### REVOKE ###
Features:
1.Converse of GRANT
2.unassigns privileges
3.Sample permission set:
linuxcbt3=arwdDxt/postgres
a = INSERT/Append
r = Read/SELECT
w = Write/UPDATE
d = DELETE
D = TRUNCATE
x = References
t = triggers
/postgres = permission delegator/issuer
Tasks:
1.'REVOKE all on messages FROM linuxcbt3' - removes all privileges from the user
2.'GRANT all on messages,messages_mid_seq TO linuxcbt3' - reinstates privileges
3.'REVOKE all on messages,messages_mid_seq FROM linuxcbt3' - removes all privileges from the user for both objects
4.Grant & Revoke INSERT | UPDATE | DELETE
a.'GRANT INSERT ON messages to linuxcbt3;'
b.'GRANT INSERT ON messages_mid_seq to linuxcbt3;' - sequence generator access
Note:INSERT may be granted independently of SELECT,unlike 'UPDATE & DELETE'
c.'REVOKE INSERT ON messages FROM linuxcbt3;REVOKE USAGE ON messages_mid_seq FROM linuxcbt3;' - Two revocations:INSERT & USAGE
d.'GRANT UPDATE ON messages TO linuxcbt3;'
e.'UPDATE messages SET mfacility = 'TEST2' WHERE mid = 204381;' - fails because the user has NO SELECT privilege to evaluate the criteria in the UPDATE query
f.'GRANT SELECT on messages TO linuxcbt3;'
g.'REVOKE ALL on messages,messages_mid_seq from linuxcbt3'
h.'GRANT DELETE ON messages TO linuxcbt3;' - a DELETE using criteria (i.e,mid = 20437) still fails without SELECT
i.'GRANT SELECT on messages TO linuxcbt3;'
Test WITH GRANT OPTION
a.'GRANT ALL ON messages TO linuxcbt3 with grant option' - allows user:'linuxcbt3' to GRANT ALL privileges on the object:'messages' to other users
b.'CREATE ROLE linuxcbt4 LOGIN PASSWORD '123456';'
c.'GRANT SELECT ON MESSAGES TO linuxcbt4' - run as linuxcbt3
d.'GRANT INSERT ON messages TO linuxcbt4' - run as linuxcbt3
e.'GRANT UPDATE ON messages TO linuxcbt4' - run as linuxcbt3
f.'GRANT USAGE ON messages_mid_seq TO linuxcbt4' - run as linuxcbt3
Attempt to revoke privileges from linuxcbt3 as user:'linuxcbt2' OR super user
a.'REVOKE ALL ON messages FROM linuxcbt3;' - fails due to dependency
b.'REVOKE ALL ON messages FROM linuxcbt3 CASCADE;' - Recursive
Note:If a privilege dependency exists,use the 'CASCADE' option with the 'REVOKE' command to descend the privilege hierarchy
Test direct removal of privileges from top level to bottom
a.GRANT ALL ON messages TO linuxcbt3 with grant option - run as linuxcbt2 or SUPER user
b.GRANT SELECT,DELETE ON messages TO linuxcbt4 - run as linuxcbt3
c.'REVOKE ALL ON messages from linuxcbt4' - run as linuxcbt2 or SUPER USER
Note:- FAILs due to grant hierarchy
d.'REVOKE ALL ON messages FROM linuxcbt3 cascade' - Recursive
### DB Backup ###
Features:
1.Individual table,DB,or full DBMS backup
2.'pg_dump' & 'pg_dumpall'
3.Operate on running DB
4.Export SQL script or Archive (pg_dump only) (used with pg_restore) formats
5.SQL script Designed for full-replay with 'psql' utility
6.Archive:Designed to allow selective and/or reordered restores
a.'-Fp' - Default - Plain SQL script output - uncompressed
b.'-Fc' - Custom,auto-compressed format - restore with 'pg_restore'
c.'-Ft' - Tar format - not compressed - restrictions on reordering - works with 'tar' & 'pg_restore'
Tasks:
1.Backup 'postgres' DB - Plain (-Fp) Format
a.'pg_dump postgres' - dumps to STDOUT
b.'pg_dump -v -f DB_Backup_postgres postgres' - generates plain text SQL script file containing data
Note:uses 'COPY' to reconstruct data as opposed to 'INSERT'
c.'pg_dump -v postgres > DB_BACKUP_POSTGRES2' - perform as above
d.'pg_dump -v -s -f DB_Backup_schema postgres ' - Dump schemas only
e.'pg_dump -v -s -t messages -f DB_Backup_schema postgres' - Dump schema of the 'messages' table only
f.'pg_dump -v -t "messages*" -f DB_BACKUP_ALL_MESSAGES_TABLES.only postgres' - Archives ALL items in 'postgres' DB beginning with 'messages'
2.Backup 'postgres' DB - Compressed (-Fc) Format
a.'pg_dump -v -Fc -f DB_Backup_postgres_compressed postgres' - creates a custom compressed file to be used with:'pg_restore'
3.Backup 'postgres' DB - Tar(-Ft) Format
a.'pg_dump -Ft -f DB_BACKUP.postgres.tar postgres' - creates a tarball of the DB
4.Use 'pg_dumpall' to archive the entire DBMS
a.'pg_dumpall -v -f DB_BACKUP_ALL' -
b.Create auth file in $HOME to obviate the need to authenticate to each DB
c.'echo "localhost:*:*:linuxcbt2:abc123">~/.pgpass && chmod 600 ~/.pgpass'
d.Re-run 'pg_dumpall' as user:'linuxcbt2' - defined in $HOME:.pgpass
e.'pg_dumpall -v -U linuxcbt2 -f DB_BACKUP_ALL'
### DB Restore ###
Features:
1.Two tools:'psql' && 'pg_restore'
Tasks:
1.Use 'pg_restore' to restore table,DB,etc.
a.'DROP TABLE messages2,messages_categories;'
b.'pg_restore -v -d postgres DB_BACKUP_POSTGRES_COMPRESSED' - full restoration using 'compressed' file
c.'pg_restore -v -d postgres DB_BACKUP_POSTGRES_tar' - full restoration using the tar file
Note:use:'pg_restore' -l backup_file to enumerate items for selective/reordered restoration
2.Backup 'linuxcbt2' DB and restore
a.'pg_dump -C -v -Fc -f DB_backup.linuxcbt2.DB linuxcbt2'
b.'drop database linuxcbt2'
c.'pg_restore -C -v -d postgres DB_backup.linuxcbt2.DB' - restores DB 'linuxcbt2'
d.'pg_dump -v -C -f DB_backup.linuxcbt2.DB.sql'
e.'DROP DATABASE linuxcbt2;'
f.'psql -f DB_backup.linuxcbt2.DB.sql' - Fails because the source file is not SQL text
3.Restore specific tables using:'pg_restore'
a.'Drop table messages'
b.'pg_restore -v -d postgres -t messages DB_BACKUP_POSTGRES_COMPRESSED' - restores 1 table
c.'pg_restore -v -d postgres -t messages DB_BACKUP_POSTGRES.tar' - restores 1 table
4.Use 'psql' to restore selected backup items (tables,sequences,etc)
a.'DROP TABLE messages,messages_categories'
b.'psql -f DB_BACKUP_ALL_MESSAGES_TABLES'
c.'DROP TABLE messages,messages_categories,messages_alerts'
d.'psql -f DB_BACKUP_ALL_MESSAGES_TABLES.only'
### Windows DB Restoration ###
Tasks:
1.Explore Windows PostgreSQL environment
a.'psql -h 192.168.1.107'
2.Restore data to WINDOWS instance
a.'psql -h 192.168.1.107 -f DB_BACKUP.all' - restores ALL DB to remote host
3.Wreak Havoc on remote DB and Restore
a.'DROP TABLE messages,messages2,messagesandcategories;'
b.'pg_restore -v -d postgres -t messages -h 192.168.1.107 DB_BACKUP_POSTGRES_COMPRESSED' - restores 1 table
c.'pg_restore -v -d postgres -t messages2 -h 192.168.1.107 DB_BACKUP_POSTGRES_COMPRESSED' - restores 1 table
d.'pg_restore -v -d postgres -t messages_categories -h 192.168.1.107 DB_BACKUP_POSTGRES_COMPRESSED' - restores 1 table
e.'DROP TABLE messages,messages_alerts,messages_categories'
f.'psql -h 192.168.1.107 -U postgres -f DB_BACKUP_ALL_MESSAGES.tables.only'
Note:Ensure that the remote system's HBA conf (pg_hba.conf) permits the connection
nmap -v -p 3389 192.168.1.107
### Installation on RedHat Enterprise Linux ###
Features:
1.PostgreSQL support
2.Ability to use the same binary used on the other distributions
Tasks:
1.Copied binary from remote system to local RedHat system
2.Executed it
3.Confirm availability:'ps -ef| grep postgres'
4.Connect and confirm default environment
5.Update linuxcbt's $PATH & $PGUSER vars
6.source if necessary for active TTY
a.'. ~/.bash_profile'
7.Mirror contents of remote server
a.Get 'DB_Backup.ALL' - use 'sftp'
b.Populate RedHat instance of PostgreSQL with data from debian box
b1.'psql -f DB_Backup.ALL'
8.Remove tables and restore across the wire using 'psql'
a.Update HBA conf (pg_hba.conf) to allow network connectivity
b.Restart 'postgres' to effect the 'pg_hba.conf' change
c.Restore items using 'psql' from remote hosts
c1.'psql -U postgres -f DB_BACKUP_ALL postgres -h 192.168.1.107' - replays script on remote Redhat Enterprise box
\p
\r
### SSH Tunnels ###
Features:
1.Secures communications from point to point
2.Encryption services
3.Wraps communications
4.Traffic is protected in transit, NOT at the endpoints
5.Defaults to protecting loopback adapter address(es)
Tasks:
1.sniff PostgreSQL communications using :'tcpdump'
a.'tcpdump -v -i lo tcp port 5432'
a1.'tcpdump -v -w postgresql.dump.1 -i lo tcp port 5432'
b.'psql -h localhost'
yum install wireshark
yum install wireshark-gnome
c.'wireshark postgresql.dump.1' - reveals sensitive data
2.Apply SSH Tunnels - from Linux
a.'ssh -L 5433:192.168.75.20:5432 192.168.75.20' - creates a tunnel between linuxcbtbuild1(.101) -> linuxcbtserv1(.20)
b.'netstat -ntl | grep 5433' - confirms existence of tunnel
c.'psql -h localhost -p 5433' - initiate connection
d.'ssh -L 5433:192.168.75.101:5432 192.168.75.101' - creates a tunnel between linuxcbtbuild1(.101) -> linuxcbtserv1(.20)
e.'psql -h 127.0.0.1 -p 5433' - initiate connection
3.Apply SSH Tunnel from Windows
a.Ensure that PuTTY or an equivalent SSH client is installed
netstat -anp tcp
b.Setup session to forward TCP:5433 & TCP:5444 to TCP:5432 on the Debian and RedHat boxes
c.Test 'psql' client access across the tunnel from Windows
Note:This will not work with windows as the target SSH Server sans Cygwin or compatible SSH service
### SSL Connections ###
Features:
1.True end-to-end encryption protection - 100%
2.Listens on the same clear-text port '5432'
3.Auto-negotiates connection type with the client unless the server config(pg_hba.conf) enforces a type
4.Supports server(default) & client certificates
5.'openssl version -d' - reveals config directory for OpenSSL
Requires:
1.Server keypair:'server.crt' (public) & 'server.key'(private) in DATA directory
2.'server.key' MUST be flagged 600
3.Optionally,'root.crt' & 'root.crl'
4.'ssl=on' - enabled via 'postgresql.conf'
Tasks:
1.Generate Server Keypair
a.'openssl req -new -text -out server.req' - Generates a request
b.'openssl rsa -in privkey.pem -out server.key' - removes passphrase
grep 1024 '/etc/ssl/openssl.cnf'
c.'rm privkey.pem' - because we now have a RSA version in 'server.key'
d.'openssl req -x509 -in server.req -text -key server.key -out server.crt' - generates self-signed certificate(.crt) file
e.'chown postgres server.key && chmod 600 server.key'
2.Configure PostgreSQL
a.'ssl=on' - postgresql.conf
b.Restart services
/etc/init.d/postgresql-9.3 restart
Note:Test inability to restart postgres when 'server.key' is not readable
3.Test connectivity
a.'psql -U postgres -h localhost' - SSL was used because TCP/IP was used
Note:SSL is not used when using unix Domain sockets
b.'psql -h 192.168.1.115' - SSL was used.
c.'psql' - SSL was not used due to Unix Domain Sockets usage
d.Connect to RedHat host and test connectivity
4.sniff SSL session with TCPDump
a.'tcpdump -w postgresql.dump.ssl.1 -v tcp port 5432'
5.Test from windows
a.'psql -h 192.168.1.115' - It works with SSL