
Showing posts with label PostgreSQL.

2018/01/06

Postgres Cross-Server/Cross-Database Querying


To demonstrate cross-database querying we will create two databases, "PlaygroundLive" & "PlaygroundStaging". Both databases can be on the same server or on different PostgreSQL servers; in the current case they are on the same server.

On both databases create the table below:

create table TestTable
(
  id int primary key,
  name varchar(500)
)

On staging, add three rows:

insert into TestTable(id,name) values(1,'sagar'),(2,'sangram'),(3,'sachin');

On live, add two rows:

insert into TestTable(id,name) values(1,'sagar'),(2,'sangram');

Now create the extensions dblink & postgres_fdw (the foreign data wrapper):

CREATE EXTENSION postgres_fdw;

CREATE EXTENSION dblink;

Check whether the extension was created by looking for the newly added functions related to dblink:

SELECT pg_namespace.nspname, pg_proc.proname FROM pg_proc, pg_namespace WHERE pg_proc.pronamespace=pg_namespace.oid AND pg_proc.proname LIKE '%dblink%';
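Alternatively, you can list the installed extensions directly from the system catalog:

```sql
-- each installed extension appears as one row in pg_extension
SELECT extname, extversion FROM pg_extension;
```

Both dblink and postgres_fdw should show up in the output.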

From our staging database, connect to the live database and check whether connectivity succeeds; the query below gives OK on success:

SELECT dblink_connect('host=localhost user=postgres password=sangram dbname=PlaygroundLive');

CREATE FOREIGN DATA WRAPPER dbrnd VALIDATOR postgresql_fdw_validator;

Creating the server connection:
Here both databases are on the same PostgreSQL server, yet we have to follow the same process: we provide the server address and the database to connect to.
CREATE SERVER demodbrnd FOREIGN DATA WRAPPER postgres_fdw OPTIONS (hostaddr '127.0.0.1', dbname 'PlaygroundLive');

You can check whether server creation succeeded by running the command below:

SELECT * FROM pg_foreign_server; 
 
CREATE USER MAPPING FOR postgres SERVER demodbrnd OPTIONS (user 'postgres',
password 'sangram');
 
Here the postgres user on the local server is mapped to the postgres user on the remote server.
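You can verify the mapping via the pg_user_mappings system view:

```sql
-- shows the foreign server, the local user and the mapping options
SELECT * FROM pg_user_mappings;
```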

Connect to Server:

SELECT dblink_connect('demodbrnd');

GRANT USAGE ON FOREIGN SERVER demodbrnd TO postgres;

Running Cross Database Queries:
Now, from the staging database, we can run queries against the live database.


1) Select:
SELECT * FROM public.dblink('demodbrnd','select id,name from public.TestTable') AS DATA(id INTEGER,name CHARACTER VARYING);

It will return two records.

2) INSERT:
SELECT dblink_connect('demodbrnd');
select * from dblink_exec('INSERT INTO public.TestTable values(3,''sachin'')')

One more record will be added to TestTable on the live database. Now TestTable is identical on both databases.

3) SELECT INTO:
SELECT * INTO temp_TestTable
FROM public.dblink('demodbrnd','select id,name from public.TestTable') 
AS DATA(id INTEGER,name CHARACTER VARYING);

Here temp_TestTable is created on the staging database and receives all three records from TestTable on the live database.

4) Parametrized SELECT:
--using parametrized query with prepare
PREPARE migrate_data (integer) AS
INSERT INTO temp_TestTable
SELECT id, name
FROM dblink('demodbrnd', 'select id,name from public.TestTable')
AS t(id integer, name varchar)
WHERE id > $1;

EXECUTE migrate_data(2);

Execution of the prepared statement will add one more record to temp_TestTable, so there will be 4 records now.

--check one more record got inserted
select * from temp_TestTable

Create a schema called "ForeignSchema" in the staging database:

CREATE SCHEMA "ForeignSchema";

Now import all tables from the remote database's public schema into the staging database's "ForeignSchema" as follows.

IMPORT FOREIGN SCHEMA public
FROM SERVER demodbrnd INTO "ForeignSchema";

Now remove the first record from the staging database table so that we are sure the two tables differ:

delete from public.TestTable where id=1;

You can check that our "TestTable" from the live server has been imported into the local database in schema "ForeignSchema" by running:

select * from "ForeignSchema".TestTable;

Now we can also run INSERT statements:
insert into "ForeignSchema".TestTable
select id+3,name from public.TestTable;

DROPPING the Foreign Data Wrapper:
To remove it, run the drop queries in the sequence below, since the objects are stacked one above the other.
drop USER MAPPING FOR postgres SERVER demodbrnd ;
drop SERVER demodbrnd;
drop FOREIGN DATA WRAPPER dbrnd;
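If you also want to remove the extensions themselves (assuming nothing else in the database depends on them), you can drop them too:

```sql
-- drop the extensions created at the beginning of this post
DROP EXTENSION dblink;
DROP EXTENSION postgres_fdw;
```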

Notes on Integrating Postgres Entity Framework with .NET Core 2.0

First create a .NET Core 2.0 MVC app, say dotnetmvcapp, using

dotnet new mvc -n dotnetmvcapp

now we will follow online tutorial

https://www.youtube.com/watch?v=md20lQut9EE

I am just listing my files, their final content & a few important points.

dotnetmvcapp.csproj


<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
    <PackageReference Include="Npgsql.EntityFrameworkCore.PostgreSql" Version="2.0.0" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="2.0.0" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Design" Version="2.0.0" />
  </ItemGroup>

  <ItemGroup>
    <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="2.0.0" />
    <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="2.0.0" />
  </ItemGroup>

</Project>


Restore the packages:

dotnet restore


Startup.cs
update the ConfigureServices method as below:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddEntityFrameworkNpgsql()
        .AddDbContext<dotnetmvcappContext>(opt =>
            opt.UseNpgsql(Configuration.GetConnectionString("MyWebAppConnection")));
}
Here you need to add the usings below to get rid of the errors at "UseNpgsql" and "dotnetmvcappContext".

using dotnetmvcapp.Models;
using Microsoft.EntityFrameworkCore;




Create Model:

a) Create Models folder

b) Inside Models folder add

dotnetmvcappContext.cs

Users.cs


dotnetmvcappContext.cs

using Microsoft.EntityFrameworkCore;

namespace dotnetmvcapp.Models
{
    public class dotnetmvcappContext : DbContext
    {
        public dotnetmvcappContext(DbContextOptions<dotnetmvcappContext> options) : base(options)
        {
        }

        public DbSet<User> Users { get; set; }
    }
}


Users.cs
namespace dotnetmvcapp.Models
{
    public class User
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }
    }
}

appsettings.json

{
  "ConnectionStrings": {
    "MyWebAppConnection": "User ID=xdba;Password=sangram;Server=localhost;Port=5432;Database=xplay;Pooling=true"
  },
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Warning"
    }
  }
}

Run Migration:
dotnet ef migrations add InitialMigration

It scaffolds the initial migration for our 'dotnetmvcappContext'.

this migration can be undone using

dotnet ef migrations remove

Apply Migration to Db:

dotnet ef database update

this will create a "Users" table inside Postgres which conforms to:


CREATE TABLE public."Users"
(
"Id" integer NOT NULL DEFAULT nextval('"Users_Id_seq"'::regclass),
"Email" text COLLATE pg_catalog."default",
"Name" text COLLATE pg_catalog."default",
CONSTRAINT "PK_Users" PRIMARY KEY ("Id")
)

You can check PostgreSQL and confirm that the "Users" table has been created.


2017/07/01

Insert, Update & Delete using the PostgREST REST API


Continuing from where we left off previously, today we will explore how to insert, update & delete records in a table using the REST API exposed by PostgREST.

Ground Up:
    Lets start postgrest first as below
 
            ./postgrest 'postgres://mypgusername:mypassword@localhost:5432/mypgdb' -a mypgusername -p 2020

       I am using the PgAdmin tool to connect to the PostgreSQL server with a user other than "mypgusername", but I am running postgrest as "mypgusername", so we might have to grant that user access for SQL operations from time to time. This also helps verify that the REST API does not allow unauthorized access.

    Now we will create a dummy table

            create table t_name
            (
                 id integer,
                 fname varchar,
                 mname varchar,
                 lname varchar
            )

    for testing purpose we will add a row to the table.

        insert into t_name values(1,'sandip','suresh','raju')

    We will open "Postman", a Google Chrome extension, and call the API below. Being a new table, we first need to grant select rights to our postgrest user:

        grant select on t_name to mypgusername;

        API End Point: http://localhost:2020/t_name
        Method: GET
        API Response:
            [
                {
                    "id": 1,
                    "fname": "sandip",
                    "mname": "suresh",
                    "lname": "raju"
                }
            ]

INSERT through REST API:
    First grant insert rights to the postgres user through which we are running postgrest; also grant update rights, as we may need them later.

        grant insert on t_name to mypgusername;
        grant update on t_name to mypgusername;

    Postgrest supports INSERT through REST out of the box.

        API End Point:http://localhost:2020/t_name   
        Payload:
            {
                "id":"2",
                "fname":"shiv",
                "mname":"shankar",
                "lname":"sharma"
            }
        HTTP Method :POST
        Request Headers:
            Content-Type:application/json
        Response:
            You should get HTTP status code 201, meaning the record was created.
            Any HTTP status code between 200 and 299 stands for successful completion of the HTTP request.

    Note: to verify that the record was inserted, call the previous API again.

UPDATE using REST API:
        API End Point: http://localhost:2020/t_name?id=eq.1
        HTTP Method: PATCH
        Payload:
            {
                "fname": "sandeeep",
                "mname": "sureshhh"
            }
        Request Header:Content-Type: application/json
        Response: HTTP status code 204 (the server has successfully fulfilled the request and there is no additional content to send in the response payload body)

    Here in our API end point we narrowed the update down to a single record by using horizontal filtering with "id=eq.1".

        Note: again, one can verify whether the record was updated properly by calling the first API listed here.

Deleting a record using REST API:
    We need to grant delete rights on our table, just like we did for insert & update:

        grant delete on t_name to mypgusername;
   
        API End Point:http://localhost:2020/t_name?id=eq.1&fname=eq.sandeeep
        HTTP Method: DELETE
        Response:Http status code 204.

        Here again we used horizontal filtering to select a single row; if no filtering were used, the REST API would delete every record from table "t_name".
   
   
   



2017/01/29

PostgreSQL Clusters

List All Clusters:

    root@debian:~# pg_lsclusters
    Ver Cluster Port Status Owner    Data directory               Log file
    9.6 main    5432 online postgres /var/lib/postgresql/9.6/main     /var/log/postgresql/postgresql-9.6-main.log

Find the status of a given instance:


    root@debian:~# pg_ctlcluster 9.6 main  status
    pg_ctl: server is running (PID: 6062)
    /usr/lib/postgresql/9.6/bin/postgres "-D" "/var/lib/postgresql/9.6/main" "-c" "config_file=/etc/postgresql/9.6/main/postgresql.conf"

Signal the Server Process:

    sudo kill -SIGHUP  6062

    Note: SIGHUP does not kill the server; it makes the postmaster re-read its configuration files. Use pg_ctlcluster (below) to actually stop it.

Change Status of instance – start/stop:

    pg_ctlcluster 9.6 main  stop
    pg_ctlcluster 9.6 main  start
    pg_ctlcluster 9.6 main  restart

View Cluster Configuration:

    root@debian:~# pg_conftool 9.6 main show all
    cluster_name = '9.6/main'
    data_directory = '/var/lib/postgresql/9.6/main'
    datestyle = 'iso, mdy'
    default_text_search_config = pg_catalog.english
    dynamic_shared_memory_type = posix
    external_pid_file = '/var/run/postgresql/9.6-main.pid'
    hba_file = '/etc/postgresql/9.6/main/pg_hba.conf'
    ident_file = '/etc/postgresql/9.6/main/pg_ident.conf'
    lc_messages = en_IN
    lc_monetary = en_IN
    lc_numeric = en_IN
    lc_time = en_IN
    log_line_prefix = '%t [%p-%l] %q%u@%d '
    log_timezone = localtime
    max_connections = 100
    port = 5432
    shared_buffers = 128MB
    ssl = true
    ssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem'
    ssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key'
    stats_temp_directory = '/var/run/postgresql/9.6-main.pg_stat_tmp'
    timezone = localtime
    unix_socket_directories = '/var/run/postgresql'

Create New Cluster:
     pg_createcluster 9.6 new_cluster

DROP Cluster:
     pg_dropcluster --stop 9.6 main

Upgrade Cluster:
     pg_upgradecluster 9.5 main

    it upgrades the cluster to the newest installed version

Reload Configuration:
      pg_ctlcluster 9.6 main reload

Reload Postgres Service (systemd):
    root@debian:~# systemctl reload postgresql

CHECK if Postgres is listening:
    root@debian:~# pg_isready
    /var/run/postgresql:5432 - accepting connections

2016/08/17

Table-valued function parameters in PostgreSQL

   In PostgreSQL, whenever we create a new table it automatically creates a composite type that corresponds to the structure of the table. We can use this type to create a variable that can hold a single row of the corresponding table.
In situations where we need to pass a whole result set as a parameter to a Postgres function, there is no native support, unlike Microsoft SQL Server, so converting an existing MSSQL procedure to a Postgres function becomes quite a headache.
We can create an equivalent representation of the result set in XML or JSON and pass it to the function; inside the function we can deserialize it back into a result set for further processing. That way we can compensate for the absence of table-valued parameters in PostgreSQL.
 Along with JSON & XML we can also pass an array of objects built from the result set to the function as a parameter.
Here I will demonstrate this approach with the built-in array functionality of Postgres.
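As a minimal sketch of the JSON variant mentioned above (the column names id and name are just for illustration), PostgreSQL 9.4+ provides json_agg for serialization and json_to_recordset for deserialization:

```sql
-- Serialize a result set into a single JSON value
SELECT json_agg(t) AS payload
FROM (SELECT 1 AS id, 'sagar' AS name) t;
-- e.g. [{"id":1,"name":"sagar"}]

-- Deserialize a JSON array of objects back into rows
SELECT *
FROM json_to_recordset('[{"id":1,"name":"sagar"},{"id":2,"name":"sangram"}]')
    AS t(id int, name text);
```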

Let's create a table as follows:
CREATE TABLE public."Projects"
(
   id integer NOT NULL,
  "createdAt" timestamp with time zone,
  "updatedAt" timestamp with time zone,
  name character varying(255) NOT NULL,
  "UserId" integer
);

For the purpose of demonstration, add a few records into the "Projects" table with two different user ids, say 1 & 2.

Now let's create one more table that will be used for passing the result set to our function; the function is a trivial one, it just gives the count of rows per user.

CREATE table projectlist
(
                list int[],
                user_id int
);

Now we will use the ARRAY constructor to serialize our result set and save it as a single row in the projectlist table as follows:

insert into projectlist(user_id,list)
select  1 "UserId",ARRAY(select id from "Projects" where "UserId" =1) ;

insert into projectlist(user_id,list)
select  2 "UserId",ARRAY(select id from "Projects" where "UserId" =2) ;

The ARRAY constructor converts the values passed to it into an array, e.g. if it gets three records 14, 15, 16 as input then the resulting value will be {14,15,16}. unnest is another Postgres function that does exactly the reverse; we will use it for de-serialization.
Below is the function that will read the record set:
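A quick illustration of the round trip:

```sql
SELECT ARRAY[14,15,16];          -- one row: {14,15,16}
SELECT unnest(ARRAY[14,15,16]);  -- three rows: 14, 15, 16
```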

CREATE OR REPLACE FUNCTION public.demo_func(
    IN  list1 projectlist,
    IN  list2 projectlist)
  RETURNS TABLE(user_id int, count bigint) AS
$BODY$
BEGIN
               
        RETURN QUERY
                WITH list1_cte AS (
                        -- deserialize the first array back into rows with unnest
                        SELECT unnest(list1.list) AS id, list1.user_id AS user_id
                ),
                list2_cte AS (
                        SELECT unnest(list2.list) AS id, list2.user_id AS user_id
                )
                select  list1_cte.user_id,count(*) as count from list1_cte group by list1_cte.user_id
                union
                select  list2_cte.user_id,count(*) as count from list2_cte group by list2_cte.user_id;

END
$BODY$
LANGUAGE plpgsql ;

Now we need to join all the dots and call our function by passing the serialized result set. We create two table-type variables, list1 & list2, and load them with the serialized data we had already inserted into the table created for this purpose.
   After that we call our function, which returns records that I loop over using a cursor.

DO
$$
DECLARE list1 projectlist;
DECLARE list2 projectlist;
DECLARE cursor_op_record RECORD;
DECLARE cursor_var refcursor;
BEGIN
                select p.* into list1  from projectlist p where user_id=1;
                select q.* into list2  from  projectlist q where user_id=2;
   OPEN cursor_var FOR
   select user_id,count from public.demo_func(list1,list2);

   RAISE INFO 'user_id, count';
   LOOP
                FETCH cursor_var INTO cursor_op_record;
                EXIT WHEN NOT FOUND;

        RAISE INFO '%,%', cursor_op_record.user_id, cursor_op_record.count;
   END LOOP;
END$$;

Here our table projectlist has a list column which is just an int[], but we can also replace it with an array of a composite type and pass whole tuples instead of single values.
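As a sketch of that variant (the type and table names here are hypothetical), the same pattern with an array of a composite type could look like:

```sql
-- a composite type matching the columns we want to pass
CREATE TYPE project_row AS (id int, name varchar);

CREATE TABLE projectlist2
(
    list project_row[],
    user_id int
);

-- serialize whole tuples instead of just ids
INSERT INTO projectlist2(user_id, list)
SELECT 1, ARRAY(SELECT ROW(id, name)::project_row
                FROM "Projects" WHERE "UserId" = 1);
```

Inside the receiving function, unnest over the composite array yields full rows rather than single integers.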

2016/08/08

Some useful queries for Postgres beginners:


    After MySQL, PostgreSQL has been gaining popularity at multiple levels in software development. It can be freely downloaded from the internet. PostgreSQL comes with command line utilities to query, back up, restore and more. For all those spoiled by GUI tools, like me, PgAdmin 3 is the right choice.

PgAdmin 3 is a freely available GUI tool similar to SQL Server Management Studio for MSSQL Server. PostgreSQL comes with tons of new features. Its DML syntax is quite similar to Oracle and MySQL, along with some of its own stuff, but a point to note is that quoted identifiers are case sensitive, meaning a table named 'Users' is different from a table named 'users'.

During my exploration of PostgreSQL I collected some useful queries, obviously from resources on the internet; many of them are from Stack Overflow. Thanks to Stack Overflow for being a great place for seeking information.

 There are multiple ways to do the same thing; I don't claim these are the most efficient ways, yet it may be useful to beginners to find them in one place.

Here are some queries from my collection

Lists Databases:

SELECT datname FROM pg_database WHERE datistemplate = false;

Lists all tables in the current database:

SELECT * 
FROM information_schema.tables 
WHERE table_type = 'BASE TABLE' 
    AND table_schema = 'public' 
ORDER BY table_type, table_name

Creating a table with Identity /auto increment columns:

CREATE SEQUENCE user_id_seq;
CREATE TABLE users (
    user_id smallint NOT NULL DEFAULT nextval('user_id_seq')
);

ALTER SEQUENCE user_id_seq OWNED BY users.user_id;

(The table is named users here because user is a reserved word in SQL.)
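Equivalently, the serial pseudo-type does all three steps in one (shown on a hypothetical users2 table):

```sql
CREATE TABLE users2
(
    user_id serial NOT NULL  -- creates users2_user_id_seq, sets the default, and owns the sequence
);
```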

All Constraints on Given table:

select * from information_schema.table_constraints where table_name='Projects'

Postgres tables without foreign key

SELECT *
FROM
information_schema.tables
WHERE
table_type = 'BASE TABLE'
AND table_schema = 'public'
and table_name not in (select distinct table_name from information_schema.table_constraints where constraint_type = 'FOREIGN KEY')


Postgres tables with foreign key:

SELECT
tc.constraint_name, tc.table_name, kcu.column_name,
ccu.table_name AS foreign_table_name,
ccu.column_name AS foreign_column_name
FROM
information_schema.table_constraints AS tc
JOIN information_schema.key_column_usage AS kcu
ON tc.constraint_name = kcu.constraint_name
JOIN information_schema.constraint_column_usage AS ccu
ON ccu.constraint_name = tc.constraint_name
WHERE
constraint_type = 'FOREIGN KEY'

pg_dump backup & restore at the command prompt:

pg_dump is a command line utility in Postgres used to back up a database; the resulting SQL script can be restored with psql.

Backup:

pg_dump -U postgres dbname -f dbname_pg_dump_20160715.sql

Restore:

psql -h localhost -d dbname -U postgresuser < E:\Repo\project\DBScript\dbname_pg_dump_20160715.sql
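For larger databases, pg_dump's custom archive format together with pg_restore is an alternative worth knowing, since it allows selective restores:

```
pg_dump -U postgres -Fc dbname -f dbname.dump
pg_restore -U postgres -d dbname dbname.dump
```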

List foreign key on given table:

SELECT
tc.constraint_name, tc.table_name, kcu.column_name,
ccu.table_name AS foreign_table_name,
ccu.column_name AS foreign_column_name
FROM
information_schema.table_constraints AS tc
JOIN information_schema.key_column_usage AS kcu
ON tc.constraint_name = kcu.constraint_name
JOIN information_schema.constraint_column_usage AS ccu
ON ccu.constraint_name = tc.constraint_name
WHERE constraint_type = 'FOREIGN KEY' AND tc.table_name='Projects';


Table referencing given table:
Below query list all table that reference User table.

SELECT
tc.constraint_name, tc.table_name, kcu.column_name,
ccu.table_name AS foreign_table_name,
ccu.column_name AS foreign_column_name
FROM
information_schema.table_constraints AS tc
JOIN information_schema.key_column_usage AS kcu
ON tc.constraint_name = kcu.constraint_name
JOIN information_schema.constraint_column_usage AS ccu
ON ccu.constraint_name = tc.constraint_name
WHERE constraint_type = 'FOREIGN KEY' AND ccu.table_name='Users';

List functions in given database:
SELECT
distinct routines.routine_name
FROM
information_schema.routines
WHERE
routines.specific_schema='public'
ORDER BY
routines.routine_name


more detailed query can be

SELECT
routines.routine_name,
parameters.data_type,
parameters.ordinal_position,
routines.specific_schema
FROM
information_schema.routines
JOIN information_schema.parameters ON
routines.specific_name=parameters.specific_name
WHERE
routines.specific_schema='public' AND routine_type='FUNCTION'
ORDER BY
routines.routine_name,
parameters.ordinal_position;

Creating New user:

CREATE USER MyUser WITH PASSWORD 'abcd546fg';
GRANT ALL PRIVILEGES ON DATABASE DbName to MyUser;

REVOKE TRUNCATE ON public."Users" FROM MyUser;
REVOKE DELETE ON public."Users" FROM MyUser;

GRANT EXECUTE ON FUNCTION public.fn_test() TO MyUser;



Change Owner of Table, Sequence, Database:

ALTER TABLE public.my_table OWNER TO MyUser;
ALTER SEQUENCE "Tablename_Columnname_seq" OWNER TO MyUser;
ALTER DATABASE DbName OWNER TO MyUser;

Rename User:
ALTER USER MyUser RENAME TO MyNewUser;

Set password:
ALTER USER MyUser WITH PASSWORD 'dljhfdkjs67N';

List all sequences:
SELECT c.relname FROM pg_class c WHERE c.relkind = 'S';


Postgres Service Start, Stop (Windows 10):
NET STOP postgresql-x64-9.5
NET START postgresql-x64-9.5


Find list of table with given column name:

select table_name from information_schema.columns where column_name = 'id'

Grant Create database access:
 ALTER USER xdba CREATEDB;

Make user super user:
 
ALTER USER xdba WITH SUPERUSER;


Please feel free to correct the queries if needed.