Oracle DB Identity Columns

In this article I’ll talk about the identity column option, one of the new features that came with Oracle DB 12c R1. I hope this will be a helpful article in terms of awareness.

It’s always been one of our major complaints that we cannot make a primary key rise automatically as a built-in feature while creating a table. We used to solve this either by manually assigning a value from a sequence during insert, or by defining a trigger on the table that fills the PK column from a sequence during insert. In Oracle 12c R1, the infrastructure to make this operation automatic was developed and released. Now we can create numeric columns whose values rise automatically.
The identity column was designed to support three different modes behind the scenes. We can choose and use whichever suits us best.

These modes are:

1- GENERATED ALWAYS AS IDENTITY

2- GENERATED BY DEFAULT AS IDENTITY

3- GENERATED BY DEFAULT ON NULL AS IDENTITY

Before going into the details of these modes, note that identity columns come with certain restrictions. We definitely need to take the following points into consideration when creating this kind of column.

  • Only one identity column can exist in a table (see the sketch after this list).
  • The identity column must be defined with a numeric data type.
  • We can’t define a default value for an identity column.
  • If we copy a table that contains an identity column using create table as select, the identity property of that column is not carried over to the newly-created table.
  • The moment we define an identity column, a NOT NULL constraint is implicitly defined for it.
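
As a quick sketch of the first two restrictions (a hypothetical demonstration of my own; the exact error messages vary by version), both of the CREATE TABLE statements below should fail:

CREATE TABLE two_identity_cols (
  id1  NUMBER GENERATED ALWAYS AS IDENTITY,
  id2  NUMBER GENERATED ALWAYS AS IDENTITY  -- fails: only one identity column is allowed per table
);

CREATE TABLE text_identity_col (
  id   VARCHAR2(10) GENERATED ALWAYS AS IDENTITY  -- fails: an identity column must be numeric
);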

Now, let’s analyze each identity column mode by giving examples.

1- GENERATED ALWAYS AS IDENTITY: We cannot supply values ourselves for an identity column created with this option. Now, let’s see it with an example.

CREATE TABLE tab_new_identity_col_table (
  tab_pk      NUMBER GENERATED ALWAYS AS IDENTITY,
  tab_name    VARCHAR2(50),
  tab_prop    NUMBER,
  tab_add     VARCHAR2(50)  
);


insert into tab_new_identity_col_table(tab_pk,tab_name,tab_prop,tab_add) values (10,'deneme',99,'dene12'); 

insert into tab_new_identity_col_table(tab_name,tab_prop,tab_add) values ('deneme',99,'dene12'); 

Now that we have run the block of code above, let’s view the output.

Table TAB_NEW_IDENTITY_COL_TABLE created.

Error starting at line : 9 in command -
insert into tab_new_identity_col_table(tab_pk,tab_name,tab_prop,tab_add) values (10,'deneme',99,'dene12')
Error at Command Line : 9 Column : 40
Error report -
SQL Error: ORA-32795: cannot insert into a generated always identity column
32795.0000 -  "cannot insert into a generated always identity column"
*Cause:    An attempt was made to insert a value into an identity column
           created with GENERATED ALWAYS keywords.
*Action:   A generated always identity column cannot be directly inserted.
           Instead, the associated sequence generator must provide the value.

1 row inserted.

As the output shows, the first insert failed because we tried to supply a value for the identity column ourselves. The second statement, however, ran successfully: since we didn’t touch the identity column in that insert, its value was generated automatically.


2- GENERATED BY DEFAULT AS IDENTITY: With this option, we can fill our identity column with a value of our own, yet we still can’t insert NULL into it; we’ll confirm that with a small sketch below. Now, let’s analyze this, too.

CREATE TABLE tab_new_identity_col_table (
  tab_pk      NUMBER GENERATED BY DEFAULT AS IDENTITY,
  tab_name    VARCHAR2(50),
  tab_prop    NUMBER,
  tab_add     VARCHAR2(50)  
);


insert into tab_new_identity_col_table(tab_pk,tab_name,tab_prop,tab_add) values (1,'deneme',99,'dene12'); 

insert into tab_new_identity_col_table(tab_name,tab_prop,tab_add) values ('deneme',99,'dene12'); 

Now that we ran the block of code above, let’s view the running logs.

Table TAB_NEW_IDENTITY_COL_TABLE created.


1 row inserted.


1 row inserted.

As we can see, both statements ran successfully. However, because the value we supplied manually (1) is the same as the first value the identity sequence produced automatically, the two records now share the same tab_pk value. If we had defined the identity column as the primary key, the second insert would have failed with a primary key constraint violation.
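
To confirm the NULL restriction mentioned above, here is a minimal sketch (not from the original post). Since the identity column carries an implicit NOT NULL constraint, the explicit NULL should be rejected, typically with ORA-01400:

insert into tab_new_identity_col_table(tab_pk,tab_name,tab_prop,tab_add) values (null,'deneme',99,'dene12');
-- fails: ORA-01400: cannot insert NULL into the identity column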


3- GENERATED BY DEFAULT ON NULL AS IDENTITY: If we create our identity column with this option, the column value is generated automatically even when we explicitly insert NULL, so neither manual values nor NULLs cause errors.

CREATE TABLE tab_new_identity_col_table (
  tab_pk      NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY,
  tab_name    VARCHAR2(50),
  tab_prop    NUMBER,
  tab_add     VARCHAR2(50)  
);


insert into tab_new_identity_col_table(tab_pk,tab_name,tab_prop,tab_add) values (null,'deneme',99,'dene12'); 

insert into tab_new_identity_col_table(tab_name,tab_prop,tab_add) values ('deneme',99,'dene12'); 

Having run the block of code, let’s now view the output.

Table TAB_NEW_IDENTITY_COL_TABLE created.


1 row inserted.


1 row inserted.

We observe that both insert statements ran successfully. Now, let’s take a glance at the state of our table.

We have analyzed how identity columns can be used with the three different modes. Now, let’s talk about what happens in the background.

The moment we create an identity column, Oracle automatically creates a sequence for this column to produce its values. From then on, every new record gets its value from this sequence before being written to the table. You can see the sequence created for our identity column with the data dictionary query below.

select * from ALL_TAB_IDENTITY_COLS;
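
If we want to look at just our own table, we can filter the view. Below is a minimal sketch (the selected columns are my own choice); the view also exposes the name of the system-generated sequence, which typically looks like ISEQ$$_<object number>:

select table_name, column_name, generation_type, sequence_name, identity_options
from   user_tab_identity_cols
where  table_name = 'TAB_NEW_IDENTITY_COL_TABLE';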


As this dictionary query shows, a sequence was indeed created in the background. This might beg the question: can we customize how the sequence behaves while creating the identity column? In other words, can we make the sequence advance in increments of five? The answer is yes. Let’s demonstrate it in the example below:

CREATE TABLE tab_new_identity_col_table (
  tab_pk      NUMBER GENERATED ALWAYS AS IDENTITY(START WITH 1 INCREMENT BY 5),
  tab_name    VARCHAR2(50),
  tab_prop    NUMBER,
  tab_add     VARCHAR2(50)  
);


insert into tab_new_identity_col_table(tab_name,tab_prop,tab_add) values ('deneme',99,'dene12'); 

insert into tab_new_identity_col_table(tab_name,tab_prop,tab_add) values ('deneme',99,'dene12'); 

insert into tab_new_identity_col_table(tab_name,tab_prop,tab_add) values ('deneme',99,'dene12'); 
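
To verify the customized increment, we can simply query the table. With START WITH 1 INCREMENT BY 5, the three inserts above should produce the values 1, 6 and 11 (a small check of my own, not part of the original post):

select tab_pk, tab_name from tab_new_identity_col_table order by tab_pk;
-- expected tab_pk values: 1, 6, 11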

As one can see, we can customize the sequence that produces the identity column’s values.
In this article, I took a deep look at the identity column, one of the new features that came with 12c, and its options. I hope this has been a helpful article in terms of awareness.


After Oracle Open World 2015


Hi Everyone,

Last week, I was at Oracle Open World 2015 to receive the Oracle Database Developer Choice Award. I also walked around and tried to benefit from the event as much as I could. Oracle Open World is held in the last week of October every year in San Francisco, where Oracle is also headquartered. If I’m not mistaken, it is the world’s biggest IT event: an IT madness which lasts for 5 days and includes over 1,000 presentations, numerous product demonstrations and over 50 thousand people from all over the world. Perhaps its biggest advantage is the opportunity to meet the people behind your product group and share ideas with them.

As there are so many presentations and events at Oracle Open World, it is simply not possible, time-wise, to follow all of them. Realistically, you can only follow the news from your own domain plus a few interesting presentations, which is what I did: I followed the sessions about Oracle Data Integration, Oracle Database Development, SQL, PL/SQL and Big Data. The main emphasis in these presentations was mostly on the new features that came with Oracle 12c, and among these features, in-memory, the one with the largest investment behind it, was the hottest topic of every session. It seems that the growth of data volumes and the requirement to run business workloads in real time will force the big players to invest more in in-memory solutions. With the 12c release, Oracle has clearly invested heavily in satisfying these customer needs, providing support both in mid-tier solutions and at the DB level. The fact that almost every presentation emphasized in-memory shows how much Oracle cares about this topic. In parallel, it wouldn’t be a false statement to say that Real-Time Analytics, NoSQL and Big Data solutions were also hot topics.

At Oracle Open World 2015, especially in the keynote talks, the main topic was, as everyone could guess, Cloud and what may happen over the upcoming 10 years. The main themes in the talks of both Larry Ellison, the founder of Oracle, and Mark Hurd, the CEO of Oracle, were Cloud, Cloud security and the best practices that companies have built with Oracle’s solutions in that domain. It seems we are entering an era in which we will hear about Cloud solutions ever more frequently, and in which the companies in our country cannot stay indifferent to this trend. On this account, as professionals working in the IT field, I should say that we have to invest in ourselves technically in order to enter this era well prepared.

Attending global events such as Oracle Open World is very valuable for understanding what the world is talking about and which problems are being worked on. I hope that we catch these kinds of opportunities from time to time and keep up with the sector.

Full Session List: https://events.rainfocus.com/oow15/catalog/oracle.jsp?search.event=openworldEvent

Now I’ll put an end to my article with a few photos I took at Oracle Open World.
I hope this has been a helpful post in terms of awareness.



Oracle Database Developer Choice Awards, SQL Category Winner!!!

Hi Everyone,
I was nominated for the Oracle Database Developer Choice Awards from Turkey approximately five months ago. On 15th October, I learned from Oracle Technology Network that I was a finalist. Voting then started to determine the winner of each category, and I finished the voting in first place thanks to the Turkish community’s support.
The winners were announced by Oracle in San Francisco (Oracle Open World 2015, YesSQL Celebration) last week. I was there and received the award from Steven Feuerstein and Andy Mendelsohn (Executive Vice President @Oracle). As I said at the award ceremony in San Francisco, I am so honored to be a winner, and I would like to express my gratitude to all the communities that supported me (especially TROUG and TurkeyJUG). Hopefully, more awards like this will come to our country, and we will have the chance to show our capabilities all around the world.


Now, let’s watch the award ceremony (26th October 2015, San Francisco, USA). (P.S.: My part starts at the 36th minute.)


Oracle Database Developer Choice Awards

Hi Everyone,

Today, I received very good news from Oracle, USA. I want both to share this news and to ask for your support. I made the finalist list for the “SQL Developer of 2015” award, which is given by Oracle worldwide. I passed a very difficult elimination stage, and now I’m on the final list. The winner will be determined by community voting, so I am counting on your support to receive this distinguished award.

Vote Link: https://community.oracle.com/community/database/awards/sql-voting

My Short Nomination Letter;

Emrah is a contributive community member, especially for Turkish Oracle professionals. He is one of the core founding members of TROUG (Turkish Oracle User Group), holds an active role in the TROUG non-profit organisation, where he served as “Database Development SIG Chairman” for 2 terms, and has contributed, and still continues to contribute, actively to local Oracle-related conferences as a speaker and organiser. Because of his work, he is well known in the Oracle community, and his posts are followed by almost the whole Oracle community in Turkey. He is currently involved in large and very large scale projects, using his expertise in Oracle SQL, PL/SQL and Oracle Data Integrator (ODI), and these projects are also giving him deep knowledge of Oracle database architecture. He is especially known as one of the Oracle SQL and Oracle PL/SQL experts in Turkey. Not only his colleagues but also his blog and social media followers consult him about complex database queries, query optimisation and Oracle-related technical questions, and he never gives up answering them. He maintains blogs, solely about Oracle technologies, which are about 5 years old (https://emrahmete.wordpress.com/ in Turkish, https://emrahmeteen.wordpress.com/ in English). He prefers to provide his content in Turkish so that local Oracle professionals can understand it better and communicate with ease. To me, his blog is very helpful and satisfying in terms of content, and it is one of the most followed Oracle blogs in the Turkey region.


APPROX_COUNT_DISTINCT

Hi everyone,
In this article I’ll talk about the APPROX_COUNT_DISTINCT function. Hopefully, this will be a helpful article in terms of awareness.

Both in routine data tests and in data analysis, we use COUNT(DISTINCT …) most of the time. Depending on the size of our data, this operation can take a long time. The main reason is that DISTINCT causes a sort operation in the background, and, as is known, sorting is one of the basic operations whose cost at the DB level grows the most with data size.
Oracle has provided software developers and analysts with an alternative, faster path by introducing the APPROX_COUNT_DISTINCT function, to avoid COUNT(DISTINCT …) operations whose run time grows with data size. This function works faster; however, unlike COUNT(DISTINCT …), it does not always produce an exact result. It swiftly finds the result with a deviation that, depending on the data size, can usually be ignored.

Now, let’s compare the performance of this function at different data sizes.

TEST1:
First, we perform our test with 1,000 rows.

CREATE TABLE appxcnt AS
SELECT mod(LEVEL,10) AS groupno,
  'TestData'
  ||LEVEL AS datacontent
FROM dual
  CONNECT BY LEVEL <= 1000;

Now that we created our table, we can start testing it in both ways.

SET TIMING ON;
SELECT count(DISTINCT groupno) FROM appxcnt;


SET TIMING ON;
SELECT approx_count_distinct(groupno) FROM appxcnt;

Yes, we can say that no difference occurred for 1,000 rows. Both queries finished in the same time frame, and the approximate count was almost exactly right.

TEST2:
Now let’s perform our test with 1,000,000 rows and a different SQL, and view the result.

CREATE TABLE appxcnt AS
SELECT mod(LEVEL,39182) AS groupno,
  'TestData'
  ||LEVEL AS datacontent
FROM dual
  CONNECT BY LEVEL <= 1000000;

We created our table. Now, we can start testing it in both ways.

SET TIMING ON;
SELECT count(DISTINCT groupno) FROM appxcnt;


SET TIMING ON;
SELECT approx_count_distinct(groupno) FROM appxcnt;


In this test, our COUNT(DISTINCT) query ran longer, as expected, and computed the result exactly. Our APPROX_COUNT_DISTINCT query ran faster; however, it couldn’t compute the result exactly and produced an approximate estimate instead.
As our tests suggest, it would not be wrong to say that the gap in run time between the two functions widens as data size increases, and that APPROX_COUNT_DISTINCT drifts somewhat further from the exact distinct count.

The APPROX_COUNT_DISTINCT function accepts a parameter of any data type except BFILE, BLOB, LONG, LONG RAW, CLOB and NCLOB. It returns a value of type NUMBER and ignores NULL values while estimating the distinct count.
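
As a small illustration of the NULL handling (a hypothetical mini-test of my own, not from the original article), both functions should ignore the NULL row below and return 2:

CREATE TABLE nulltest (val NUMBER);
INSERT INTO nulltest VALUES (1);
INSERT INTO nulltest VALUES (2);
INSERT INTO nulltest VALUES (NULL);
COMMIT;

-- both counts ignore the NULL value and return 2
SELECT COUNT(DISTINCT val), APPROX_COUNT_DISTINCT(val) FROM nulltest;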

Finally, I am sharing the Explain Plans of the two functions to give you an idea and help you interpret the results.

COUNT(DISTINCT):
(explain plan screenshot)

APPROX_COUNT_DISTINCT:
(explain plan screenshot)

You can follow the link in the “references” section for more detailed test results and information. I hope it has been a useful article in terms of awareness.

REFERENCES
http://docs.oracle.com/database/121/SQLRF/functions013.htm#SQLRF56900


Oracle 12c JSON Support

Hi everyone,

In this article I’ll talk about the JSON support that came with Oracle DB 12c (12.1.0.2).

I hope this will be a useful article in terms of awareness.

With its new release, Oracle 12c (12.1.0.2) is now capable of recognizing and querying JSON data. I like this feature because we used to need extra work to query and analyze application logs after saving them to the DB. With this feature, that effort is no longer necessary.
Now, let’s take a look at what we can do over an example. Let’s start with creating a table first.

create table applicationLog
(
appId number,
appName varchar2(30),
appLog clob,
  CONSTRAINT applicationLogCons CHECK (appLog IS JSON)
);

We created our table. What we need to pay attention to here is the last line, where we defined a check constraint. With it, we told Oracle DB that we will store JSON data in the appLog column. From now on, the system knows that JSON data will be kept there.

Now let’s add a few rows.

 

insert into applicationLog values (1,'WebContent','{
		"ad"  				: "emrah",
		"soyad"				: "mete",
		"tckn"				: "147456741232",
		"dogumTarihi"		: "01.01.1987",
		"bolum"				: "Bilgisayar Muhendisligi",
		"okul"				: "Yildiz Teknik Universitesi",
		"sinif"				: "Mezun",
		"adres"				: {	"ilce"	:"kadikoy",
								"sokak"	:"yurt",
								"numara":"56",
								"il"	:"Istanbul"}
}
');

insert into applicationLog values (2,'WebContent','{
		"ad"  				: "ali",
		"soyad"				: "veli",
		"tckn"				: "98752145232",
		"dogumTarihi"		: "01.01.1989",
		"bolum"				: "Bilgisayar Muhendisligi",
		"okul"				: "İstanbul Teknik Universitesi",
		"sinif"				: "Mezun",
		"adres"				: {	"ilce"	:"kartal",
								"sokak"	:"sehirci",
								"numara":"789",
								"il"	:"Istanbul"}
}
');

insert into applicationLog values (3,'WebContent','{
		"ad"  				: "ahmet",
		"soyad"				: "mehmet",
		"tckn"				: "21352145232",
		"dogumTarihi"		: "01.01.1989",
		"bolum"				: "Bilgisayar Muhendisligi",
		"okul"				: "Orta Doğu Teknik Universitesi",
		"sinif"				: "Mezun",
		"adres"				: {	"ilce"	:"maltepe",
								"sokak"	:"sakinlik",
								"numara":"12",
								"il"	:"Istanbul"}
}
');
commit;

We have added three rows, and each of their appLog columns contains JSON data. If there had been a record that did not fit this format, the system would have raised an error (see the sketch below).
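
Here is a minimal sketch of the check constraint in action (a hypothetical fourth row of my own; the insert should fail with a check constraint violation, ORA-02290):

insert into applicationLog values (4,'WebContent','this is not JSON data');
-- fails: ORA-02290: check constraint (APPLICATIONLOGCONS) violated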

Now, let’s query our data through the appLog column and see how we can reference the elements of the JSON data in our query. Say we want our query to return the records whose okul element, inside the JSON data in the appLog column, is “Yildiz Teknik Universitesi”.

select * from applicationLog t1 where t1.applog.okul='Yildiz Teknik Universitesi';


As we can see in the result, we managed to run a query over the JSON data. The point that needs our attention is that we can reference each element of the JSON document with dot notation, as in t1.applog.okul.

Now, let’s write one more example query to practice the usage better. This query will bring the tckn information of the records whose ilce element, inside the adres element of the JSON data in the appLog column, is “maltepe”.

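The original post showed this query and its result as a screenshot; below is a sketch of what the query likely looked like, using the same dot notation on the nested elements:

select t1.applog.tckn from applicationLog t1 where t1.applog.adres.ilce = 'maltepe';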

While creating our table in the example above, we told Oracle, by defining a check constraint, that we would keep JSON data in the appLog column. Now, let’s see how we can query JSON data in a table with no such constraint defined.

First, I’ll create a table and insert one record with JSON data and another without JSON data.

create table applicationLog2
(
appId number,
appName varchar2(30),
appLog clob
);

insert into applicationLog2 values (1,'WebContent','{
		"ad"  				: "emrah",
		"soyad"				: "mete",
		"tckn"				: "147456741232",
		"dogumTarihi"		: "01.01.1987",
		"bolum"				: "Bilgisayar Muhendisligi",
		"okul"				: "Yildiz Teknik Universitesi",
		"sinif"				: "Mezun",
		"adres"				: {	"ilce"	:"kadikoy",
								"sokak"	:"yurt",
								"numara":"56",
								"il"	:"Istanbul"}
}
');

insert into applicationLog2 values (2,'WebContent','Bu data JSON degil');

commit;

Now, let’s find the records that have JSON data in the appLog column.

select  * from applicationLog2 t1 where t1.appLog is JSON;


We’ve listed the records whose appLog column contains JSON data by using the “IS JSON” condition. Now let’s see how the JSON elements of those records can be referenced in queries.

select  JSON_VALUE(t1.appLog, '$.ad') from applicationLog2 t1 where t1.appLog is JSON;

With the “JSON_VALUE” function, we managed to reference elements of the JSON data in appLog directly in the query.

In the same way, we could use it in the where clause like this:

select  * from applicationLog2 t1 where t1.appLog is JSON and JSON_VALUE(t1.appLog, '$.ad') = 'emrah';


As we can see from the examples above, the JSON support that arrived with Oracle 12c (12.1.0.2) makes processing application logs in particular much easier. You can follow the links below for more detailed information on this topic. I hope this has been a helpful article in terms of awareness.

REFERENCES
https://docs.oracle.com/database/121/ADXDB/json.htm


Oracle Pattern Matching

Hi everyone,

In this article I’ll talk about a new SQL feature which has come with Oracle 12c.
I hope this will be a useful article in terms of awareness.

In fact, writing about this topic has been on my mind for a while; however, I just couldn’t find the chance because of my job. I did, though, give a presentation about it at the TROUG Datawarehouse SIG event at ITU in 2014. Those who attended took advantage of that opportunity, and I think this article will come in handy for everyone else.

Oracle’s pattern matching feature adds capabilities that strengthen Oracle’s already potent analytic query infrastructure even further. Thanks to this feature, we can produce strong input for our business by writing queries that reveal the trends in the data we have.

We can use the pattern matching feature for business needs such as the following:

  • Financial applications
  • Analysis of how product prices fluctuate over time
  • Analysis of how stock prices fluctuate over time, and taking action
  • Spotting unusual movements in security applications, and taking action
  • Fraud detection
  • Time-based inspection of data coming from sensors
  • Cases that require Complex Event Processing

Before this feature existed, the technology of the day did not let us perform these kinds of analyses with a single query. We could reach these results only by carrying out a series of operations and transformations, which took considerable time and effort. With this feature, however, we can analyze our data with a single query, quickly and easily, and report the results.

Now, let’s follow a sample case in order to grasp how this mechanism works.

Suppose we have data containing a company’s hourly download volumes. (The original post showed this data as a table and a chart; the full data set appears in the INSERT statements below.)

Now, let’s explain our case. Since no one is at the company at night, an increase in downloads is not expected during those hours. I want to raise an alarm when the download volume increases for 5 consecutive hours. Actually, generating the alarm is a second step which we will not implement here; what matters is detecting whether the downloaded volume rose continuously over the last five-hour period.

The trend I’m looking for starts with an increase, keeps rising successively for five hours, and forms a five-point upward run (the original post highlighted it on the chart).

Now, let’s see how we can implement this technically.

First, we create our table and insert the data.

CREATE TABLE NETWORK_TRAFIC
(
   TRAFIC_TYPE          VARCHAR2 (10),
   TRAFIC_DATE          TIMESTAMP,
   TRANSFER_AMOUNT_MB   NUMBER
);


INSERT INTO NETWORK_TRAFIC (TRAFIC_TYPE, TRAFIC_DATE, TRANSFER_AMOUNT_MB)
        VALUES (
                  'Download',
                  TO_DATE ('10.01.2015 01:00:00', 'DD.MM.YYYY HH:MI:SS'),
                  10);

INSERT INTO NETWORK_TRAFIC (TRAFIC_TYPE, TRAFIC_DATE, TRANSFER_AMOUNT_MB)
        VALUES (
                  'Download',
                  TO_DATE ('10.01.2015 02:00:00', 'DD.MM.YYYY HH:MI:SS'),
                  5);

INSERT INTO NETWORK_TRAFIC (TRAFIC_TYPE, TRAFIC_DATE, TRANSFER_AMOUNT_MB)
        VALUES (
                  'Download',
                  TO_DATE ('10.01.2015 03:00:00', 'DD.MM.YYYY HH:MI:SS'),
                  3);

INSERT INTO NETWORK_TRAFIC (TRAFIC_TYPE, TRAFIC_DATE, TRANSFER_AMOUNT_MB)
        VALUES (
                  'Download',
                  TO_DATE ('10.01.2015 04:00:00', 'DD.MM.YYYY HH:MI:SS'),
                  7);

INSERT INTO NETWORK_TRAFIC (TRAFIC_TYPE, TRAFIC_DATE, TRANSFER_AMOUNT_MB)
        VALUES (
                  'Download',
                  TO_DATE ('10.01.2015 05:00:00', 'DD.MM.YYYY HH:MI:SS'),
                  8);

INSERT INTO NETWORK_TRAFIC (TRAFIC_TYPE, TRAFIC_DATE, TRANSFER_AMOUNT_MB)
        VALUES (
                  'Download',
                  TO_DATE ('10.01.2015 05:00:00', 'DD.MM.YYYY HH:MI:SS'),
                  15);

INSERT INTO NETWORK_TRAFIC (TRAFIC_TYPE, TRAFIC_DATE, TRANSFER_AMOUNT_MB)
        VALUES (
                  'Download',
                  TO_DATE ('10.01.2015 06:00:00', 'DD.MM.YYYY HH:MI:SS'),
                  25);

INSERT INTO NETWORK_TRAFIC (TRAFIC_TYPE, TRAFIC_DATE, TRANSFER_AMOUNT_MB)
        VALUES (
                  'Download',
                  TO_DATE ('10.01.2015 07:00:00', 'DD.MM.YYYY HH:MI:SS'),
                  50);

INSERT INTO NETWORK_TRAFIC (TRAFIC_TYPE, TRAFIC_DATE, TRANSFER_AMOUNT_MB)
        VALUES (
                  'Download',
                  TO_DATE ('10.01.2015 08:00:00', 'DD.MM.YYYY HH:MI:SS'),
                  2);

COMMIT;

Now, let’s write the query that will catch this trend, and then explain its technical details.

  SELECT *
    FROM NETWORK_TRAFIC
            MATCH_RECOGNIZE (
               PARTITION BY TRAFIC_TYPE
               ORDER BY TRAFIC_DATE
               MEASURES 
                      STRT.TRAFIC_DATE AS START_TSTAMP,
                      LAST (UP.TRAFIC_DATE) AS END_TSTAMP
               ONE ROW PER MATCH
               AFTER MATCH SKIP TO LAST UP
               PATTERN (STRT UP UP UP UP UP DOWN)
               DEFINE
                      DOWN AS DOWN.TRANSFER_AMOUNT_MB < PREV (DOWN.TRANSFER_AMOUNT_MB),
                      UP AS UP.TRANSFER_AMOUNT_MB > PREV (UP.TRANSFER_AMOUNT_MB)
            ) MR
   ORDER BY MR.TRAFIC_TYPE, MR.START_TSTAMP;

There! When we run the query above, we get a single matching row (the original post showed the result grid as a screenshot).

As a consequence of the query, the data matched the trend we described, and the result was returned. We can interpret the result as follows: our trend started at 3am and ended at 7am. No result would have been returned if there had been no run of rows fitting the pattern.

Now, let’s explain the key clauses of our query:

partition by: the part where we state how to split the data logically. We used the trafic_type field here.
order by: the field used to order each logical group. We used the trafic_date field here, because we follow the flow over time.
measures: the metrics, namely the columns that will appear in the query result. We defined a starting point and an ending point.
one row per match: with this phrase we ask for only one output row per set of rows matching our pattern.
after match skip to last up: the point from which matching resumes after every match is the last ‘up’ row of the matched trend.
pattern (STRT UP UP UP UP UP DOWN): this is where we describe exactly which trend we are looking for: a starting row, five consecutive rises, and then a drop.
define: this is where each pattern variable (UP, DOWN) gets the condition that a row must satisfy to be labeled with it.

Although this new feature seems pretty challenging at first, I have seen that queries can be written quickly once you get used to it. It’s a construct with quite a few parameters, but it saves us the trouble of dealing with the many separate stages of trend analysis, and we get a performant result at the end of the day. We covered the topic with a very simple implementation, which you can use as a model for your own practice. You can just as easily detect a ‘V’ trend or a ‘W’ trend in your data; see the sketch below. I hope this has been a helpful article.
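
For instance, a ‘V’ shape (a fall followed by a rise) could be matched with a pattern like the one below. This is a minimal sketch of my own against the same NETWORK_TRAFIC table, not something from the original post:

  SELECT *
    FROM NETWORK_TRAFIC
            MATCH_RECOGNIZE (
               PARTITION BY TRAFIC_TYPE
               ORDER BY TRAFIC_DATE
               MEASURES
                      STRT.TRAFIC_DATE AS START_TSTAMP,
                      LAST (DOWN.TRAFIC_DATE) AS BOTTOM_TSTAMP,
                      LAST (UP.TRAFIC_DATE) AS END_TSTAMP
               ONE ROW PER MATCH
               AFTER MATCH SKIP TO LAST UP
               PATTERN (STRT DOWN+ UP+)
               DEFINE
                      DOWN AS DOWN.TRANSFER_AMOUNT_MB < PREV (DOWN.TRANSFER_AMOUNT_MB),
                      UP AS UP.TRANSFER_AMOUNT_MB > PREV (UP.TRANSFER_AMOUNT_MB)
            ) MR
   ORDER BY MR.TRAFIC_TYPE, MR.START_TSTAMP;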

P.S. I recommend you follow the link in the references section for more detailed information and examples.

REFERENCES:

http://www.oracle.com/ocom/groups/public/@otn/documents/webcontent/1965433.pdf
