Sunday, February 22, 2015

Mega vendors discover in-memory technology – Computer Week

Manufacturers have always relied on the perfect interplay between RAM and hard drive to give users the fastest possible access to their data.
Photo: Andrea Danti – Fotolia.com

Access to RAM is up to a million times faster than access to conventional hard drives, and even in terms of data throughput it still leads by a factor of almost 100. Memory prices fall by an average of 30 percent per year. At the same time, processors across almost all architectures are becoming ever more powerful. Yet most access methods in databases still reflect the algorithmic state of the art of the 1980s and 1990s: they focus on making the interaction between hard disk and RAM as efficient as possible. But is that still up to date?

SAP HANA

To throw off this legacy and, of course, to use technical progress to shake up the lucrative market for database management systems (DBMS), developers from the Hasso Plattner Institute and Stanford University presented the first examples of their relational in-memory database for real-time analysis in 2008. Initially named, fittingly, “Hasso’s New Architecture”, it became SAP HANA two years later, now standing for “High Performance Analytic Appliance”.
The goal was to run a single platform, or even a single data set, for both online applications (online transaction processing, OLTP) and analytical purposes (online analytical processing, OLAP). The developers wanted to eliminate the previously rigid and time-consuming separation between operational and business-intelligence tasks, creating an extraordinary productivity advantage for users of this appliance. Nevertheless, HANA came to market in 2011 “only” for the SAP Business Warehouse. Since mid-2013 it has also been available for the operational SAP modules.

HANA is deliberately designed to keep all data in main memory. The database makes very intensive use of the CPU caches, organizes the data predominantly in columns instead of the usual rows, compresses the data in RAM and on disk, and parallelizes data operations across the CPU cores of multicore systems and even across multiple computing nodes.



Storing data in columns instead of rows allows high compression rates and fast processing – especially in main memory.
Photo: Trivadis AG
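
To make the idea from the figure concrete, the following minimal, vendor-neutral Python sketch (the table, column names and values are invented for illustration) stores the same records once row by row and once column by column and runs the same small aggregation over both layouts; in the columnar layout the query only needs to touch the two arrays it actually uses, and the low number of distinct values per column is what enables the high compression rates.

# Minimal, vendor-neutral sketch of row- versus column-oriented storage.
# The table, column names and values are invented for illustration only.

rows = [
    {"order_id": 1, "country": "DE", "amount": 120.0},
    {"order_id": 2, "country": "DE", "amount": 80.5},
    {"order_id": 3, "country": "CH", "amount": 200.0},
    {"order_id": 4, "country": "DE", "amount": 15.0},
]

# Row store: every record is kept together, so a query on two attributes
# still walks through all complete records.
total_row_store = sum(r["amount"] for r in rows if r["country"] == "DE")

# Column store: every attribute is a contiguous array of its own, which
# scans quickly and compresses well because each array holds similar values.
columns = {
    "order_id": [r["order_id"] for r in rows],
    "country": [r["country"] for r in rows],
    "amount": [r["amount"] for r in rows],
}
total_column_store = sum(
    amount
    for amount, country in zip(columns["amount"], columns["country"])
    if country == "DE"
)

assert total_row_store == total_column_store  # same result, different access pattern
print(total_column_store)  # 215.5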

And suddenly existing IT systems were called into question: Will analytical queries now become a million times faster and perhaps even be available online? Should we now part with our data warehouses and older DBMSs? Is the head start SAP has built up still catchable? The pressure on traditional database vendors such as IBM grew, and soon they enriched the market with their own in-memory solutions. To this day, of course, no approach is quite like another.



IBM DB2 BLU Acceleration

In April 2013, the in-memory feature package “BLU Acceleration” was released as part of the advanced editions of IBM’s DB2 database. In principle, the same techniques are used here as in HANA. However, IBM integrates them directly into its existing technology and lets conventional and memory-optimized tables coexist within the same database. These tables can be converted from one format to the other and, according to IBM, are supposed to accelerate well-optimizable queries by a factor of 8 to 40. In addition, compression saves up to 90 percent of space, both in RAM and on disk. Unlike HANA, however, BLU Acceleration is still clearly focused on analytical workloads today. In productive use, OLTP should therefore continue to run on row-oriented tables.
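
As a rough, hedged illustration of this coexistence, the sketch below collects the kind of DDL involved; the table and column names are invented, and in practice the statements would be run against a DB2 database prepared for BLU through a client or DB-API driver.

# Hedged sketch: illustrative DDL for DB2 with BLU Acceleration.
# Table and column names are invented; a real setup would also require a
# database configured for analytics (for example via DB2_WORKLOAD=ANALYTICS).

# Column-organized (BLU) table for the analytical workload ...
CREATE_COLUMN_TABLE = """
CREATE TABLE sales_facts (
    order_id   INTEGER       NOT NULL,
    order_date DATE          NOT NULL,
    amount     DECIMAL(12,2)
) ORGANIZE BY COLUMN
"""

# ... coexisting with a conventional row-organized table in the same database.
CREATE_ROW_TABLE = """
CREATE TABLE order_entry (
    order_id INTEGER     NOT NULL,
    status   VARCHAR(20)
) ORGANIZE BY ROW
"""

def create_tables(cursor):
    """Run the DDL through any DB-API 2.0 cursor connected to DB2."""
    cursor.execute(CREATE_COLUMN_TABLE)
    cursor.execute(CREATE_ROW_TABLE)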

Microsoft SQL Server In-Memory Database Features

Microsoft SQL Server has followed the trend as well. As early as the SQL Server 2012 release, column-oriented, compressed in-memory indexes could be created on special tables for complex queries. This made it possible to speed up analyses considerably. The indexes are still created in addition to the normal table and must be rebuilt manually after every change – with the new SQL Server 2014, however, they are updatable.
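
What this looks like in practice can be sketched with a little T-SQL; the object names below are invented, and the statements would be submitted through a SQL Server connection, for example via a DB-API driver such as pyodbc.

# Hedged sketch: illustrative T-SQL for SQL Server columnstore indexes.
# Object names are invented.

# SQL Server 2012: a nonclustered columnstore index is created in addition to
# the row-oriented base table and accelerates analytical scans of its columns.
CREATE_NONCLUSTERED_COLUMNSTORE = """
CREATE NONCLUSTERED COLUMNSTORE INDEX ix_sales_cs
    ON dbo.Sales (OrderDate, ProductId, Quantity, Amount);
"""

# SQL Server 2014: a clustered columnstore index stores the whole table in
# columnar form and stays updatable, so no manual rebuild after each change.
CREATE_CLUSTERED_COLUMNSTORE = """
CREATE CLUSTERED COLUMNSTORE INDEX ix_saleshistory_ccs
    ON dbo.SalesHistory;
"""

def create_indexes(cursor):
    """Execute the statements through any DB-API 2.0 cursor (e.g. pyodbc)."""
    cursor.execute(CREATE_NONCLUSTERED_COLUMNSTORE)
    cursor.execute(CREATE_CLUSTERED_COLUMNSTORE)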

Version 2014 added “In-Memory OLTP”, a new solution aimed exclusively at accelerating transactions in operational systems such as ERPs or CRMs. Tables of this type are stored completely in memory. For transaction-intensive applications they allow significant performance gains – according to Microsoft, a factor of 100 or even higher, depending on the application. Here too, in-memory tables coexist peacefully with conventional tables in the same database and can be combined in almost any way.
Microsoft thus offers two separate solutions, one for OLTP and one for analysis.
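
A memory-optimized table of the kind used by In-Memory OLTP is declared directly in T-SQL; the sketch below uses invented names and assumes a database that already has a MEMORY_OPTIMIZED_DATA filegroup configured.

# Hedged sketch: illustrative T-SQL for a SQL Server 2014 In-Memory OLTP table.
# Names are invented; the database is assumed to already contain a
# MEMORY_OPTIMIZED_DATA filegroup.

CREATE_MEMORY_OPTIMIZED_TABLE = """
CREATE TABLE dbo.ShoppingCart (
    CartId     INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId INT       NOT NULL,
    CreatedAt  DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
"""

def create_table(cursor):
    """Execute the DDL through any DB-API 2.0 cursor connected to SQL Server."""
    cursor.execute(CREATE_MEMORY_OPTIMIZED_TABLE)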



Oracle Database In-Memory Option

In July 2014, Oracle finally followed suit and equipped its 12c database with a paid in-memory add-on option. It consists essentially of an “in-memory column store” to speed up analytical queries, but due to its design it is partly also suitable for OLTP applications.

The Oracle database works much like IBM’s BLU Acceleration and achieves performance improvements by a factor of 10 to 100, depending on the application. Unlike the other solutions, however, the Oracle database does not write any in-memory data to disk. Column-oriented data management, automatic indexing, compression – all of these operations take place exclusively in main memory. All disk-related operations are carried out, redundantly, by conventional means but are consistently mirrored into the in-memory structures. On the one hand this brings disadvantages in the form of redundant resource consumption and the lack of compression on disk; on the other hand it is also the particular advantage of this approach: for Oracle, “in memory” is just a switch with which tables or parts of tables can be turned into optimized in-memory tables online. There is no need to migrate data to benefit from the new capabilities, which makes testing very easy.
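
The “switch” character of the option can be hinted at with a few statements; the table name is invented, and the instance parameter INMEMORY_SIZE is assumed to have been set to a non-zero value so that the in-memory column store exists at all.

# Hedged sketch: illustrative Oracle SQL for the Database In-Memory option.
# The table name is invented; INMEMORY_SIZE is assumed to be set on the instance.

ENABLE_INMEMORY = "ALTER TABLE sales INMEMORY"                         # populate the column store
ENABLE_INMEMORY_PRIORITY = "ALTER TABLE sales INMEMORY PRIORITY HIGH"  # populate eagerly
DISABLE_INMEMORY = "ALTER TABLE sales NO INMEMORY"                     # back to disk-only access

def toggle_inmemory(cursor, enable=True):
    """Flip the in-memory 'switch' for the table through any DB-API 2.0 cursor."""
    cursor.execute(ENABLE_INMEMORY if enable else DISABLE_INMEMORY)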

Similarities and differences

The thrust of the “Big Three” is clear: a change to a different, separate in-memory platform such as HANA should no longer be necessary. In the simplest case, administrators of existing databases only need to flip a switch to speed up all applications many times over. But is that really realistic? A closer look quickly shows that all the new in-memory databases use similar mechanisms. These include column-oriented data management, automatically generated and fully memory-optimized data structures, intensive use of CPU features, and compression of the data in main memory and/or on disk.
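
One of these shared mechanisms, columnar compression, can be illustrated with a short vendor-neutral Python sketch; the data is invented, and the dictionary encoding shown is only one representative of the compression schemes these products combine.

# Vendor-neutral sketch of dictionary encoding, one of the compression schemes
# used by columnar in-memory engines; the data is invented.

country_column = ["DE", "DE", "CH", "DE", "AT", "DE", "CH", "DE"]

# Store each distinct value once and replace every entry by a small code.
dictionary = sorted(set(country_column))                 # ['AT', 'CH', 'DE']
code_of = {value: code for code, value in enumerate(dictionary)}
encoded = [code_of[value] for value in country_column]   # [2, 2, 1, 2, 0, 2, 1, 2]

# Decoding restores the original column, so filters and joins still work on it.
assert [dictionary[code] for code in encoded] == country_column
print(len(dictionary), encoded)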

But there are also noticeable differences between the individual solutions. First, there is the question of limits on database size imposed by the available memory. SAP HANA and SQL Server’s in-memory OLTP solution load the data completely into main memory at the latest on the first query, while Oracle and IBM DB2 do not require this and thus make it easier to work with large amounts of data. In addition, Oracle does not write the compressed data to storage media in compressed form; it must be rebuilt after every restart of the database. This approach offers more flexibility in administration and a wider range of applications, but it saves no space on disk and creates redundancies in processing.

And then there is the question of supported application types. Microsoft offers special table types optimized for OLTP, IBM table types for purely analytical workloads. SAP HANA distinguishes, where needed, between row- and column-oriented data management and supports both types of workload. Oracle positions its solution primarily for the analytical field, but also promises improvements for OLTP applications because fewer indexes are needed on the tables, which should improve DML throughput.



Which in-memory approach is the right one?

Companies basically face two questions: Which in-memory approach is the right one, and which solution should I buy?

The possible answers cannot be generalized, however; they depend on the individual scenario. In any case, companies should take a differentiated look at all technical, organizational and cost-related particularities – both for new developments and for application migrations. They should be aware that only a few use cases can be served without additional adjustments – and that they will have to invest in the corresponding services. Moreover, the performance benefits will not materialize for every application, and in some cases applications can even slow down. In some cases, functional limitations will also have to be accepted.

Before implementing an in-memory database, a thorough evaluation should take place, both when moving to a new platform and for a planned migration. Even in the case of a seemingly simple switch to in-memory technology within the familiar RDBMS, it is advisable to identify the suitable data and to check all applications in detail. This can be done either with in-house teams or by an external service provider.



Is in-memory technology worth it?

For IT decision-makers, the question ultimately remains whether a switch to in-memory technology is worthwhile, even or especially when in some cases it is only an upgrade of existing systems. The fact is that in specific application scenarios, both in OLTP and in the analytical field, the solutions work much faster and more efficiently than conventional techniques. However, only very specific processes can be accelerated by the factors mentioned above – on average, the performance gain will be significantly lower.

Nevertheless, in-memory technology allows costs for hardware and licenses to be reduced in many cases. These cost reductions, however, only become visible after the changeover, because the transition to the new technology is almost always associated with application adjustments – and these must be included in the overall cost calculation.

In-memory databases are a real asset to the database world. But they are not suitable for every application and should be examined comprehensively before committing to the new technology. (bw)
