Thursday, April 19, 2018

How to Get the Last Day of the Current Month

In this PostgreSQL syntax tip, we will show how to get the last day of the current month.



The NOW() function returns the current date and time.

SELECT now();
"2018-04-20 13:51:09.20666+07"

Often when working with dates, we either need to get the first day of the month or the last day of the month.

Getting the first day is easy and can be done with date_trunc.

SELECT date_trunc('MONTH', now())::DATE;
"2018-04-01"

Then, to get the last day of the current month, use this syntax:

SELECT (date_trunc('MONTH', now()) + INTERVAL '1 MONTH - 1 day')::DATE;
"2018-04-30"

See, easy, right?



How to INSERT Using the ON CONFLICT Clause in a PostgreSQL Database - UPSERT Syntax

Hi, sometimes we need to perform multiple inserts into a table in a PostgreSQL database. This article shows you how to INSERT using the ON CONFLICT clause in a PostgreSQL database. OK, let's get to the point.




Since version 9.5, PostgreSQL has supported UPSERT through the ON CONFLICT clause, with the following syntax (similar to MySQL):


INSERT INTO the_table (id, column_1, column_2) 
VALUES (1, 'A', 'X'), (2, 'B', 'Y'), (3, 'C', 'Z')
ON CONFLICT (id) DO UPDATE 
  SET column_1 = excluded.column_1, 
      column_2 = excluded.column_2;
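If you want to try this yourself, you need a table with a unique constraint on id as the arbiter. A minimal table definition (an assumption for this example, not part of the original statement) could look like this:

CREATE TABLE the_table (
    id       integer PRIMARY KEY,  -- the primary key acts as the arbiter for ON CONFLICT (id)
    column_1 text,
    column_2 text
);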


ON CONFLICT Clause

The optional ON CONFLICT clause specifies an alternative action to raising a unique violation or exclusion constraint violation error. For each individual row proposed for insertion, either the insertion proceeds, or, if an arbiter constraint or index specified by conflict_target is violated, the alternative conflict_action is taken. ON CONFLICT DO NOTHING simply avoids inserting a row as its alternative action. ON CONFLICT DO UPDATE updates the existing row that conflicts with the row proposed for insertion as its alternative action.
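For example, a sketch of the DO NOTHING form against the same the_table: a row whose id already exists is simply skipped instead of raising an error.

INSERT INTO the_table (id, column_1, column_2)
VALUES (1, 'A', 'X')
ON CONFLICT (id) DO NOTHING;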

conflict_target can perform unique index inference. When performing inference, it consists of one or more index_column_name columns and/or index_expression expressions, and an optional index_predicate. All table_name unique indexes that, without regard to order, contain exactly the conflict_target-specified columns/expressions are inferred (chosen) as arbiter indexes. If an index_predicate is specified, it must, as a further requirement for inference, satisfy arbiter indexes. Note that this means a non-partial unique index (a unique index without a predicate) will be inferred (and thus used by ON CONFLICT) if such an index satisfying every other criteria is available. If an attempt at inference is unsuccessful, an error is raised.

ON CONFLICT DO UPDATE guarantees an atomic INSERT or UPDATE outcome; provided there is no independent error, one of those two outcomes is guaranteed, even under high concurrency. This is also known as UPSERT — “UPDATE or INSERT”.

conflict_target
Specifies which conflicts ON CONFLICT takes the alternative action on by choosing arbiter indexes. Either performs unique index inference, or names a constraint explicitly. For ON CONFLICT DO NOTHING, it is optional to specify a conflict_target; when omitted, conflicts with all usable constraints (and unique indexes) are handled. For ON CONFLICT DO UPDATE, a conflict_target must be provided.
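In other words (a sketch against the same table): DO NOTHING may omit the conflict_target, while DO UPDATE must name one.

-- Valid: DO NOTHING without a conflict_target
INSERT INTO the_table (id, column_1) VALUES (1, 'A')
ON CONFLICT DO NOTHING;

-- DO UPDATE requires a conflict_target such as (id)
INSERT INTO the_table (id, column_1) VALUES (1, 'A')
ON CONFLICT (id) DO UPDATE SET column_1 = excluded.column_1;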

conflict_action
conflict_action specifies an alternative ON CONFLICT action. It can be either DO NOTHING, or a DO UPDATE clause specifying the exact details of the UPDATE action to be performed in case of a conflict. The SET and WHERE clauses in ON CONFLICT DO UPDATE have access to the existing row using the table's name (or an alias), and to rows proposed for insertion using the special excluded table. SELECT privilege is required on any column in the target table where corresponding excluded columns are read.
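A short sketch of both ways of referring to rows, using an alias t for the target table:

INSERT INTO the_table AS t (id, column_1, column_2)
VALUES (1, 'A-new', 'X-new')
ON CONFLICT (id) DO UPDATE
  SET column_1 = excluded.column_1,  -- value from the row proposed for insertion
      column_2 = t.column_2;         -- value from the existing row, via the alias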

Note that the effects of all per-row BEFORE INSERT triggers are reflected in excluded values, since those effects may have contributed to the row being excluded from insertion.

index_column_name
The name of a table_name column. Used to infer arbiter indexes. Follows CREATE INDEX format. SELECT privilege on index_column_name is required.

index_expression
Similar to index_column_name, but used to infer expressions on table_name columns appearing within index definitions (not simple columns). Follows CREATE INDEX format. SELECT privilege on any column appearing within index_expression is required.
collation
When specified, mandates that corresponding index_column_name or index_expression use a particular collation in order to be matched during inference. Typically this is omitted, as collations usually do not affect whether or not a constraint violation occurs. Follows CREATE INDEX format.

opclass
When specified, mandates that corresponding index_column_name or index_expression use particular operator class in order to be matched during inference. Typically this is omitted, as the equality semantics are often equivalent across a type's operator classes anyway, or because it's sufficient to trust that the defined unique indexes have the pertinent definition of equality. Follows CREATE INDEX format.

index_predicate
Used to allow inference of partial unique indexes. Any indexes that satisfy the predicate (which need not actually be partial indexes) can be inferred. Follows CREATE INDEX format. SELECT privilege on any column appearing within index_predicate is required.
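As a sketch (the table and index here are hypothetical), a partial unique index can be chosen as the arbiter by repeating its predicate as the index_predicate:

CREATE TABLE orders (
    id     serial PRIMARY KEY,
    ref    text,
    active boolean NOT NULL DEFAULT true
);
-- Partial unique index: ref must be unique only among active orders
CREATE UNIQUE INDEX orders_active_ref_idx ON orders (ref) WHERE active;

INSERT INTO orders (ref, active)
VALUES ('A-100', true)
ON CONFLICT (ref) WHERE active DO NOTHING;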

constraint_name
Explicitly specifies an arbiter constraint by name, rather than inferring a constraint or index.
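A sketch, assuming the primary key constraint is named the_table_pkey (PostgreSQL's default name for it):

INSERT INTO the_table (id, column_1, column_2)
VALUES (1, 'A', 'X')
ON CONFLICT ON CONSTRAINT the_table_pkey DO UPDATE
  SET column_1 = excluded.column_1;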

condition
An expression that returns a value of type boolean. Only rows for which this expression returns true will be updated, although all rows will be locked when the ON CONFLICT DO UPDATE action is taken. Note that condition is evaluated last, after a conflict has been identified as a candidate to update.
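For example (a sketch): only overwrite the existing row when the incoming value actually differs.

INSERT INTO the_table (id, column_1, column_2)
VALUES (1, 'A', 'X')
ON CONFLICT (id) DO UPDATE
  SET column_1 = excluded.column_1
  WHERE the_table.column_1 IS DISTINCT FROM excluded.column_1;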

Note that exclusion constraints are not supported as arbiters with ON CONFLICT DO UPDATE. In all cases, only NOT DEFERRABLE constraints and unique indexes are supported as arbiters.

INSERT with an ON CONFLICT DO UPDATE clause is a “deterministic” statement. This means that the command will not be allowed to affect any single existing row more than once; a cardinality violation error will be raised when this situation arises. Rows proposed for insertion should not duplicate each other in terms of attributes constrained by an arbiter index or constraint.
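For example, a statement like this sketch fails with a cardinality violation, because both VALUES rows target the same id and the second one would affect the same row again:

INSERT INTO the_table (id, column_1, column_2)
VALUES (1, 'A', 'X'), (1, 'B', 'Y')
ON CONFLICT (id) DO UPDATE
  SET column_1 = excluded.column_1;  -- raises a cardinality violation error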


Thursday, September 14, 2017

A Guide to Query Tuning in PostgreSQL

This article provides guidance on query tuning in PostgreSQL. Once you know which statements are performing poorly and can see their execution plans, it's time to start tweaking the queries to get better performance. This is where you make changes to the queries and/or add indexes to try to get a better execution plan. Start with the bottlenecks and see whether there are changes you can make that reduce costs and/or execution times.


A note about the data cache and comparing apples to apples

As you make changes and evaluate the resulting execution plans to see whether they are better, it's important to know that subsequent executions may rely on data caching, which can give the impression of better results. If you run a query once, make a tweak, and run it a second time, it will likely run much faster even if the execution plan is not more favorable. This is because PostgreSQL may have cached data used in the first run and can reuse it in the second run. Therefore, you should run queries at least three times and average the results to compare apples to apples.

OK, here are some query tuning guidelines for PostgreSQL that will help you get better execution plans:

1. Indexes
  • Eliminate sequential scans (Seq Scan) by adding indexes (unless the table is small)
  • If using a multicolumn index, make sure you pay attention to the order in which you define the included columns (see the sketch after this list)
  • Try to use indexes that are highly selective on commonly-used data; this makes their use more efficient.
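As a quick illustration (a sketch with a hypothetical sales table, not from the original article), a multicolumn index helps queries that filter on its leading column, but not queries that filter only on a trailing column:

-- Hypothetical example: this index serves WHERE customer_id = ...
-- and WHERE customer_id = ... AND created_at >= ...,
-- but not a query that filters only on created_at.
CREATE INDEX sales_customer_created_idx ON sales (customer_id, created_at);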

2. WHERE clause
  • Avoid LIKE
  • Avoid function calls in the WHERE clause (see the sketch after this list)
  • Avoid large IN() lists
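For example (a sketch with a hypothetical users table): wrapping an indexed column in a function hides a plain index from the planner; one common workaround, if it fits your workload, is an expression index.

-- A plain index on email is not usable for:
--   SELECT * FROM users WHERE lower(email) = 'foo@example.com';
-- An expression index on lower(email) can be used instead:
CREATE INDEX users_email_lower_idx ON users (lower(email));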

3. JOINs

  • When joining tables, try to use a simple equality condition in the ON clause (e.g. a.id = b.person_id). Doing so allows more efficient join techniques to be used (e.g. a Hash Join instead of a Nested Loop Join)
  • Convert subqueries to JOIN statements when possible, as this often lets the optimizer understand the intent and possibly choose a better plan
  • Use JOINs properly: are you using GROUP BY or DISTINCT just because you're getting duplicate results? This often indicates improper JOIN usage and can result in higher costs
  • If the execution plan is using a Hash Join, it can be very slow when the table size estimates are wrong. Therefore, make sure your table statistics are accurate by reviewing your vacuuming strategy
  • Avoid correlated subqueries where possible; they can significantly increase query cost
  • Use EXISTS when checking for the existence of rows based on some criterion, because it "short-circuits" (stops processing once it finds at least one match); see the sketch after this list
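A sketch of the EXISTS pattern, using hypothetical customers and orders tables:

-- Returns each customer that has at least one order; the subquery can stop
-- at the first matching row per customer.
SELECT c.id, c.name
FROM customers c
WHERE EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id);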

4. General pointers
  • Do more with less; CPU is faster than I/O
  • Utilize Common Table Expressions and temporary tables when you need to run certain queries
  • Avoid LOOP statements and prefer SET operations
  • Avoid COUNT(*), as PostgreSQL does table scans for this (versions <= 9.1 only)
  • Avoid ORDER BY, DISTINCT, GROUP BY, and UNION when possible, because these cause high startup costs
  • Look for a large variance between estimated rows and actual rows in the EXPLAIN output. If the counts are very different, the table statistics may be outdated and PostgreSQL is estimating cost using inaccurate statistics. For example: Limit (cost=282.37..302.01 rows=93 width=22) (actual time=34.35..49.59 rows=2203 loops=1). The estimated row count was 93 and the actual was 2,203, so the planner is likely making a bad choice. You should review your vacuuming strategy and ensure ANALYZE is being run often enough (see the sketch after this list).
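A minimal sketch of that workflow (the sales table is hypothetical): compare the planner's estimate with the actual row count, then refresh statistics if they are far apart.

-- Compare "rows=" (estimate) with "actual ... rows=" in the output
EXPLAIN ANALYZE SELECT * FROM sales WHERE customer_id = 42;

-- If the estimates are far off, refresh the planner statistics for the table
ANALYZE sales;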

Sunday, August 13, 2017

Porting SQL Syntax and Functions from Oracle to PostgreSQL



When we use PostgreSQL as a substitute for Oracle, or migrate from Oracle, we will face syntax differences. Oracle and PostgreSQL both conform to standard SQL. However, they contain several extensions and implementation details that differentiate one from the other. The most important differences are listed in the table below.





Oracle: select sysdate from dual
PostgreSQL: select 'now'::datetime
Explanation: There is no "dual" table. Unlike other RDBMS, PostgreSQL allows a "select" without the "from" clause. This does not affect portability because the syntax to get the current time is already DBMS specific.

Oracle:
CREATE SEQUENCE seqname [ INCREMENT BY integer ] [ MINVALUE integer ] [ MAXVALUE integer ] [ START WITH integer ] [ CACHE integer ] [ CYCLE | NOCYCLE ]
To return the current value and increment the counter: sequence_name.nextval;
Possible usage in a select statement: select sequence_name.nextval from dual;
PostgreSQL:
CREATE SEQUENCE seqname [ INCREMENT increment ] [ MINVALUE minvalue ] [ MAXVALUE maxvalue ] [ START start ] [ CACHE cache ] [ CYCLE ]
To return the current value and increment the counter: nextval('sequence_name');
Possible usage in a select statement: select nextval('sequence_name');
Explanation: If you don't specify MAXVALUE, then the maximum value is 2147483647 for ascending sequences. Note that, unlike other RDBMS, PostgreSQL allows a select without the 'from' clause. This does not affect portability because the sequence syntax is already DBMS specific.

Oracle:
SELECT product_id,
       DECODE (warehouse_id, 1, 'Southlake',
                             2, 'San Francisco',
                             3, 'New Jersey',
                             4, 'Seattle',
                             'Non-domestic') quantity_on_hand
FROM inventories
PostgreSQL:
SELECT a,
       CASE WHEN a=1 THEN 'one'
            WHEN a=2 THEN 'two'
            ELSE 'other'
       END
FROM test
Explanation: Convert DECODE to CASE WHEN.

Oracle: select employeeid, NVL(hire_date, sysdate) from employee where employeeid = 10;
PostgreSQL: select employeeid, coalesce(hire_date, 'now'::datetime) from employee where employeeid = 10;
Explanation: Oracle also has a "coalesce" function, which is a generalization of the commonly used NVL function.
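A small runnable sketch of the sequence syntax on the PostgreSQL side (the sequence name is made up for illustration):

CREATE SEQUENCE order_seq INCREMENT 1 START 1 CACHE 1;
SELECT nextval('order_seq');  -- advances the sequence and returns the new value
SELECT currval('order_seq');  -- returns the value most recently obtained by nextval in this session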
More syntax conversions will be covered in a future post...

Thursday, August 10, 2017

4 Tools for Database Conversion from Oracle to PostgreSQL

There are several external tools, paid or free, to help migrate an Oracle database to PostgreSQL. Here are 4 external tools that you can try.








1. Database conversion between Oracle (and 30+ other database engines) and PostgreSQL. With everything on localhost, typical throughput is over 100k records per second.


2. Software that is able to convert 1 million records in 4-5 minutes. A trigger-based database sync method and simultaneous bi-directional synchronization help you manage your data easily and efficiently.


3. A toolkit that migrates an Oracle database to PostgreSQL through a wizard. It connects to the Oracle and PostgreSQL databases directly and migrates table structure, data, indexes, primary keys, foreign keys, comments and so on.

4. Ora2Pg is a Perl module to export an Oracle database schema to a PostgreSQL-compatible schema. It connects to your Oracle database, extracts its structure, and generates an SQL script that you can load into your PostgreSQL database.

Wednesday, August 9, 2017

How to Understand SQL Indexes and Where to Use an Index

An index is used to speed up searching in the database. "An index makes the query fast" is the most basic explanation of an index I have ever seen. Although it describes the most important aspect of an index very well, it is, unfortunately, not sufficient for this article.


OK, Let's Understand the SQL Index. 

Searching in a database index is like searching in a printed telephone directory. The key concept is that all entries are arranged in a well-defined order. Finding data in an ordered data set is fast and easy because the sort order determines each entry's position.

A database index is, however, more complex than a printed directory because it undergoes constant change. Updating a printed directory for every change is impossible for the simple reason that there is no space between existing entries to add new ones. A printed directory bypasses this problem by only handling the accumulated updates with the next printing. An SQL database cannot wait that long. It must process insert, delete and update statements immediately, keeping the index order without moving large amounts of data.




The database combines two data structures to meet the challenge: a doubly linked list and a search tree. These two structures explain most of the database's performance characteristics.

And where should I use an index?

An index can be used to efficiently find all rows matching some column in your query, and then walk through only that subset of the table to find exact matches. If you don't have an index on any column in the WHERE clause, the SQL server has to walk through the whole table and check every row to see if it matches, which may be a slow operation on big tables.


The index can also be a UNIQUE index, which means that you cannot have duplicate values in that column, or a PRIMARY KEY which in some storage engines defines where in the database file the value is stored.

In PostgreSQL, you can put EXPLAIN in front of your SELECT statement to see whether your query will make use of an index. This is a good start for troubleshooting performance problems.
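A minimal sketch (hypothetical table and index) of checking index usage with EXPLAIN:

CREATE TABLE employees (
    id   serial PRIMARY KEY,
    name text
);
CREATE INDEX employees_name_idx ON employees (name);

-- On a sufficiently large table, the plan should show an Index Scan
-- (or Bitmap Index Scan) on employees_name_idx instead of a Seq Scan.
EXPLAIN SELECT * FROM employees WHERE name = 'Alice';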
