
Posts by [email protected]

Improved Database Performance for Parallels Plesk Users

Posted by MySQL NEWS No Comment yet

With the online landscape transforming rapidly, businesses are shifting their operations online. Many hosting enterprises cater to this shift by setting up new websites, reseller accounts, email accounts, DNS entries and more through a web-based interface. To deliver these services, business enterprises use Parallels Plesk, a commercial web hosting platform.

Its control panel lets a server administrator set up new websites, reseller accounts, email accounts, and DNS entries through a web-based interface. This commercial web hosting data center automation software serves enterprises that provide Linux- or Windows-based commercial web hosting. Plesk is widely used by developers, content managers, IT admins, digital agencies, and infrastructure providers.

Importance of database optimization for Parallels Plesk users

Plesk manages the web hosting systems and applications on a single server, so the database servers must deliver consistently high performance; any slowdown hinders the ongoing operations. In practice, however, consistently high performance is not what many Plesk users experience.

Plesk installs custom versions of the MySQL and PostgreSQL databases, along with the Apache Tomcat Java platform server and ColdFusion server, or Microsoft SQL Server and Microsoft SQL Server Desktop Engine on Windows.

When these custom versions are not installed according to the requirements, persistent issues often arise that slow down the database server and hurt overall performance.

To overcome these unwanted slowdowns, database experts often recommend Plesk database tuning to optimize server performance. Let’s discuss the reasons behind the slowdowns and how Plesk database tuning can increase overall database server performance.

Plesk database tuning to optimize server performance

For seamless web hosting services, it is important to maintain high server performance, which can suffer for a number of reasons. Here are a few causes of database slowdowns for Parallels Plesk users, and how to optimize server performance by resolving them:

  • When the custom databases are not properly installed

Plesk requires custom installations of its databases, which is not everyone’s cup of tea given the expertise involved. Any mistake during installation can cause persistent, long-term slow performance due to compatibility issues between the database servers.

To avoid this, it is always advisable to have experienced Database Administrators install the databases and perform the initial setup. This ensures smooth, seamless functioning of the database servers in future.

  • High CPU usage

CPU usage can run high when the server has to do more processing than usual, which in turn produces multiple slow queries.

A diligent database administrator performs timely health check-ups to trace slow queries, and resolves them before any slowdown occurs, keeping server performance optimized.
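As a concrete starting point (the file path and thresholds below are illustrative), slow queries can be surfaced by enabling MySQL's slow query log in the server configuration:

```ini
# my.cnf -- enable MySQL's slow query log (values are illustrative)
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 2   # log statements that take longer than 2 seconds
log_queries_not_using_indexes = 1
```

Reviewing this log regularly shows which statements to tune before users notice a slowdown.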

  • Optimize tables

Tables can cause slowdowns because changes and deleted rows leave behind unused space that is never reclaimed unless the table is optimized.

Optimizing a database table in Plesk can noticeably improve performance: the old portions of the table, where deleted rows are still kept in the linked list, are reclaimed for reuse. A diligent database professional can identify such tables during Plesk database tuning and optimize the overall space utilization of the Plesk database.
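In MySQL, which Plesk bundles, this unused space can be inspected and reclaimed with standard statements; the schema and table names below are placeholders:

```sql
-- See how much reclaimable free space each table in a schema carries
SELECT table_name, data_free
FROM information_schema.tables
WHERE table_schema = 'mydb';

-- Rebuild a table and reclaim its unused space (table name is illustrative)
OPTIMIZE TABLE customers;
```

On InnoDB tables, OPTIMIZE TABLE is mapped to a table rebuild, so it is best run during a low-traffic window.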

The way forward

Because a single Plesk database server operates at a large scale, hosting many websites, DNS zones, reseller accounts and email accounts, optimizing its performance is important.

Database optimization is a key requirement for the seamless performance of the web hosting platform, so the database should be well maintained with timely server health checks.

With 18 years of experience as a database administrator, I make sure the database is proactively optimized before any issue occurs. If you are facing issues with your Plesk database and are looking for Plesk database tuning, feel free to contact me for an in-depth consultation on optimizing server performance.

MongoDB vs Cassandra: Choose the right Database for your enterprise

Posted by MySQL NEWS No Comment yet


Technological advancements have pushed businesses to go completely digital, which has also made them fully dependent on databases instead of the hard copy record files used previously. Since all the data in day-to-day operations flows from the database, a high-performing database is a crucial need for any enterprise.

Choosing the right database is therefore important for an enterprise, especially to maintain seamless operations without slowdowns. A diligent Database Administrator (DBA) will always guide you toward the database that can deliver optimum performance.

Databases, too, have advanced over time, leaving end users a fleet of options to choose from according to their data and operational requirements. Databases come in different types, such as SQL and NoSQL, which makes choosing the right type an important decision.

MongoDB and Cassandra are two such NoSQL databases used to store enterprise data. Although both are NoSQL databases, they differ from one another, and to choose the most suitable one we have to explore their respective use cases.

Before we compare MongoDB vs Cassandra to decide which one is more suitable for your enterprise, let us first understand the basics of each.


What is MongoDB?

MongoDB is a NoSQL document store database. Document stores are non-relational databases in which the data is stored as JSON, BSON or XML documents. The data can be anything expressible as key-value pairs plus metadata.

A document store database such as MongoDB is usually considered when the enterprise data to be stored is semi-structured. Since the data lives in documents, querying is performed within a given document rather than across the entire database as in structured databases.

Besides its scalability and caching for real-time analytics, MongoDB is popular for sharding. MongoDB sharding is the process of distributing the data across multiple servers, with each server storing a different section of the data, to offer high scalability.

MongoDB sharding is done using a shard key index: different ranges of key values are assigned to different servers, which then store and process the corresponding data.

The sharding is coordinated by router processes, which contact the config servers to determine where each piece of data should be read from or written to on the shard servers.
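The routing idea can be sketched in a few lines (a toy illustration of hashed sharding, not MongoDB's actual implementation):

```python
import hashlib

def pick_shard(shard_key: str, num_shards: int) -> int:
    """Toy hashed-sharding router: hash the key, map it to a shard index."""
    digest = hashlib.md5(shard_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

# The same key always routes to the same shard...
assert pick_shard("user42", 4) == pick_shard("user42", 4)

# ...while many different keys spread out across the shards.
used = {pick_shard(f"user{i}", 4) for i in range(100)}
print(sorted(used))
```

A real router also consults the config servers for chunk metadata, but the deterministic key-to-shard mapping is the core idea.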

MongoDB Use Cases

MongoDB can be a great choice if your enterprise needs high scalability and caching for real-time analytics. It is, however, not ideal for transactional data such as accounting systems.

MongoDB supports rich object data models: objects can have properties and can be nested to form multiple levels of hierarchy. MongoDB also lets you index these properties, even when they are nested, which gives you a lot of flexibility.

This proves very useful when your data is unstructured or semi-structured. MongoDB is also a good option if you want a single point of writes, since it follows a single-master model in which all writes go to the primary.


What is Cassandra?

Cassandra is also a NoSQL database, of the wide column store type. A wide column store is non-relational and stores key-value style data; in Cassandra, keyspaces containing large column families are used instead of schemas.

Cassandra provides the flexibility for different rows to have different numbers and types of columns. Such databases are ideal for row-based data where the number of columns varies widely.

Cassandra also offers scalability: it can be deployed across multiple servers, with easy-to-use built-in replication.

Moreover, it allows querying in CQL, a language quite similar to the SQL used in relational databases.
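For a flavour of the syntax (the table and column names here are invented for illustration), a CQL table definition and query look very close to SQL:

```sql
-- CQL (Cassandra Query Language); schema is illustrative
CREATE TABLE users_by_city (
    city    text,
    user_id uuid,
    name    text,
    PRIMARY KEY (city, user_id)
);

SELECT name FROM users_by_city WHERE city = 'Berlin';
```

Note the Cassandra-specific part: the primary key doubles as the partition and clustering definition, so queries should filter on it.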

Cassandra Use Cases

Cassandra can be a great choice for structured or semi-structured data where high scalability is required. Because it supports a multiple-master model, Cassandra is widely recommended when you want continuous availability: if one node goes down, another can take over, avoiding downtime.

This is also why Cassandra can accept writes on multiple nodes, so adding more servers lets you scale writes.

If you are looking for full database query support, Cassandra is a good choice, as it supports querying with CQL, which is quite similar to SQL and thus provides proper language support.

The final verdict MongoDB vs Cassandra

Although MongoDB and Cassandra are both NoSQL databases, they differ from one another: Cassandra suits semi-structured data better, while MongoDB’s object-oriented approach offers far more flexibility for unstructured data.

Each database has areas where it outperforms the other. That said, optimum utilization of a database is only possible when the right choice is made.

To choose the right database, a diligent database professional analyses every aspect of the database requirements and the kind of data to be stored.

In short, MongoDB suits enterprises that need to store unstructured data at scale, and Cassandra suits those that need to store semi-structured data at scale with continuous availability.

Still confused about which database to choose for your enterprise? Feel free to contact me for an in-depth, insightful consultation based on an analysis of your enterprise and its data, to choose the right enterprise database for your business.

15: Factors that affect Database Performance

Posted by MySQL NEWS No Comment yet

With every business going online lately, the demand for a high-capacity, efficient database has become the need of the hour. To cater to this need, it is crucial to ensure high database performance.

To ensure a high-performing database, it is important to know the factors that affect database performance in one way or another.

Before we get into the details of the factors affecting database performance, let us first understand what a database performance is and why it is so important.

What is Database Performance?

Database performance is the speed with which the database responds to requests to access data. Though it may sound simple, it has many layers to it, which makes it complicated, especially for DBAs (Database Administrators), who spend most of their time ensuring high database performance.

Why is Database Performance so important?

What makes database performance so important is how many operations depend on the data they access from the database. Any slowdown in the database can subsequently slow down entire operations, putting end users off the service.

Thus, Database Performance is substantial for a good user experience.

Factors that affect Database Performance

Here is a list of factors that affect database performance and that must be looked after to keep performance high.

Workload

Workload is one of the primary factors affecting the performance of a database. It comprises the data access requests sent to the database by the system or end users. A higher number of requests means an increased workload, which requires more processing from the database.


A workload is the cumulative total of online transactions, batch jobs, ad hoc queries, analysis and business intelligence queries, utilities, and system commands directed at the DBMS (Database Management System) at any given point of time.

It is a dynamic entity that changes from time to time. While it can sometimes be predicted, for instance around peak business hours, it often remains unpredictable, making it a challenging job for the DBA.

Overall, this makes workload one of the primary factors affecting database performance, and it is therefore monitored carefully by a diligent Database Administrator.


Throughput

Throughput plays an important part in database performance, being the measure of the input/output speed of the entire process. This speed reflects whether queries are answered quickly or slowly.


Throughput depends not only on hardware factors such as CPU speed and capacity but also on the software of the system, such as the operating system, mainframes, software programs, and their integration.

If the hardware and the software perform efficiently, the throughput of the database is observed to be efficient, resulting in better database performance.


Resources

Hardware and software resources play an important role in database performance, directly affecting throughput and other factors. Efficient resources such as the CPU, memory, cache controllers and microcode are important for high database performance.

Optimization

An optimized system means a better performing system. This makes optimization of the database a crucial factor in ensuring a higher performance of the database.


An efficient Database Administrator ensures that the SQL queries are well optimized by following diligent optimization and database tuning practices, thereby producing optimally efficient code for better performance.


Contention

Contention usually happens when the workload on a database increases and many queries try to access the same resources, causing system conflicts. This usually results in a slowdown that affects database performance.


Contention arises when one operation tries to access or update data on which another operation is already making changes. To avoid such conflicts, the DBMS provides a locking mechanism, which locks other users out of a piece of data while one user is updating it.

Lower contention means better database performance, thanks to fewer conflicts and the slowdowns they cause.
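Lock contention is easy to reproduce, sketched here with SQLite purely as a stand-in for a DBMS: while one connection holds a write lock in an open transaction, a second writer is refused.

```python
import os
import sqlite3
import tempfile

# Two connections to the same database file (SQLite stands in for any DBMS).
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, isolation_level=None)  # we manage BEGIN/COMMIT
writer.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
writer.execute("INSERT INTO accounts VALUES (1, 100)")

# The first connection takes the write lock and holds it in an open transaction.
writer.execute("BEGIN IMMEDIATE")
writer.execute("UPDATE accounts SET balance = 90 WHERE id = 1")

# A second connection tries to write and is locked out at once
# (timeout=0 means "do not wait for the lock to be released").
other = sqlite3.connect(path, timeout=0, isolation_level=None)
try:
    other.execute("UPDATE accounts SET balance = 80 WHERE id = 1")
    outcome = "updated"
except sqlite3.OperationalError as exc:
    outcome = str(exc)  # typically "database is locked"

writer.execute("COMMIT")  # releasing the lock lets other writers proceed
print(outcome)
```

Production databases use finer-grained locks (rows, pages) rather than whole-file locks, but the principle, and the slowdown when many writers queue on the same data, is the same.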

In a nutshell

Improving these factors will result in a high-performing database, which in turn ensures more seamless business operations. Database performance is thus analysed and tuned in the interest of the business operations that depend on it.


14: Five Ways to Improve Database Performance

Posted by MySQL NEWS No Comment yet



Have you ever heard the term ‘Big Data’? Chances are you have: most businesses, big and small, work with data. Owing to the increasing volume of data, data management is more important than ever. The need to manage data while increasing efficiency has led to the need for database optimization, which matters to any database administrator who aims to improve database performance.



With so many technologies available, it is important to understand how to make the most of your database management system. So let’s have a look at some of the most important database performance improvement techniques.


Query Optimization


Managing the data is one thing, but the way you fetch it plays a major role too. Most of the time, query optimization is necessary for good database performance. It can be hard, for instance, to decide between subqueries and joins.


The efficiency of a query is directly proportional to the performance of the system, mainly because of the load involved in fetching the data.


Hence, query optimization is one of the most important database performance improvement techniques.




Indexing

An index is a major part of a database management system and a major contributor to the database optimization process. An index essentially contains keys to different parts of a table, making it easier to locate data and decreasing processing time. With a missing index, a search operation must run through the entire set of data in the table, putting load on the system. Indexing is also necessary for the query optimization discussed above.
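The effect of an index is easy to demonstrate; the sketch below uses SQLite's EXPLAIN QUERY PLAN as a stand-in (MySQL's EXPLAIN reports similar information):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, name TEXT)")
conn.executemany(
    "INSERT INTO users (email, name) VALUES (?, ?)",
    [(f"user{i}@example.com", f"User {i}") for i in range(1000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN reports whether a scan or an index lookup is used.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT name FROM users WHERE email = 'user42@example.com'"
before = plan(query)   # without an index: a full table scan
conn.execute("CREATE INDEX idx_users_email ON users (email)")
after = plan(query)    # with the index: a direct index lookup
print(before)
print(after)
```

The query text never changes; only the access path does, which is why adding the right index can turn a slow query into a fast one without touching application code.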


Memory Allocation


When planning your database performance improvement techniques, memory allocation should be based on realistic data requirements. The memory allocated to a database has a major impact on system performance: just as it needs powerful hardware, a database needs ample free memory to boost performance and speed up processing.


A common way to keep a check on memory availability is the number of page faults: the higher the number, the lower the available memory. A high page fault rate has a negative impact on the database optimization process.



Thus, developers are advised to allocate generous memory to the database (approximately 70% of the total memory when it is the only application on the server). To modify the allocation, edit the configuration file and set the innodb_buffer_pool_size key in the .cnf file.
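For example, in the MySQL configuration file (the size here is illustrative and depends on your server's RAM):

```ini
# my.cnf -- InnoDB buffer pool sizing (value is illustrative)
[mysqld]
innodb_buffer_pool_size = 12G   # e.g. roughly 70% of RAM on a dedicated 16 GB server
```

The server must be restarted for the change to take effect on older MySQL versions; MySQL 5.7 and later can resize the buffer pool online.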


Defragmenting the Data


According to Wikipedia, “In computer storage, fragmentation is a phenomenon in which storage space is used inefficiently, reducing capacity and performance.” This is a very common scenario in database management systems, so it is necessary to defragment the data.


Over time, stored data starts to fragment, which affects system performance. Running defragmentation brings related data back together, decreasing processing time.


Moreover, defragmentation can also ease the memory allocation and query optimization problems mentioned earlier.


Disk Type


Choosing the right disk type for your database is essential to system performance. This concerns the hardware side of the optimization process: just like a faster CPU and more memory, the properties of the storage disk can improve the performance of the system.


One of the most efficient disk types available is the solid-state drive (SSD). I/O performance is proportional to the speed of the disk where the data is stored, so working with a faster drive, preferably an SSD, will help your database optimization process.





Database optimization is an ongoing process. There are numerous methods to speed it up, and most will vary with your requirements and data. The tips above are some common ways to improve database performance.


I’ve been a database performance expert for almost two decades and have worked with many organizations, some of them in the Fortune 500. Get in touch with me today and we can discuss how I can help you improve your production database performance.

13: Techniques for Database Optimization in MySQL

Posted by MySQL NEWS No Comment yet

The database lays the foundation for any kind of online operation. As all the data in the system flows from the database, high database performance is important, and database optimization is therefore often recommended.

Diligent Database Administrators perform database tuning on a regular basis to keep up the performance of the database. But before we jump to the MySQL database optimization techniques, let us first understand what database optimization means.

What is Database Optimization?

Database optimization refers to the techniques used to ensure that SQL queries against a database run in the fastest possible time, resulting in higher productivity. These techniques involve watching several different aspects of the database to keep it in good overall health.

Why is it important to perform Database tuning regularly?

Regular database tuning ensures faster database access. It works by finding all possible areas of slowdown through regular check-ups, while queries are optimized to run fast and return the desired output.


For smaller databases holding comparatively little data, the time difference between a tuned database and an untuned one is not significant. When databases are considerably large, however, the gap between tuned and untuned becomes significant, making regular database tuning important for seamless operations.

Techniques to perform Database tuning in MySQL

Here’s how you can optimize your database for a high performance by using the following MySQL database optimization techniques:

1.   Perform regular server health check-ups


The server forms the spine of any database, and keeping it healthy requires regular health check-ups. These help you detect and rectify the areas causing a slowdown.

You can use built-in diagnostic views for this. On SQL Server these are the execution-related Dynamic Management Views and Functions (Transact-SQL), commonly known as DMVs; MySQL provides similar insight through its performance_schema and sys schema. Both let you detect slowdowns by checking server health during execution.
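On the MySQL side, a comparable health check can be sketched against performance_schema (available in MySQL 5.6 and later); the statement digests with the highest total wait time are good slowdown candidates:

```sql
-- Top 5 statement patterns by total execution time (MySQL performance_schema)
SELECT digest_text,
       count_star     AS executions,
       sum_timer_wait AS total_wait_ps   -- reported in picoseconds
FROM performance_schema.events_statements_summary_by_digest
ORDER BY sum_timer_wait DESC
LIMIT 5;
```

Because results are grouped by normalized statement text, this shows which query shapes cost the most overall, not just single slow executions.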

2.   Create optimized Indexes to make queries efficient

Creating efficient, well-optimized indexes is one of the most effective database tuning techniques in MySQL. A well-optimized index allows a query to avoid unnecessarily scanning entire tables for its result, which is especially beneficial when the database is large.

The server automatically measures and stores statistical information about the distribution of values in the indexed column as soon as the index is created. This information helps in formulating the best optimization strategy for the database.
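As a minimal sketch (the table and column names are invented), you create an index on the column your queries filter by and confirm with EXPLAIN that it is used:

```sql
-- Index the column used in WHERE clauses (names are illustrative)
CREATE INDEX idx_orders_customer ON orders (customer_id);

-- EXPLAIN shows whether the query now uses the index instead of a full scan
EXPLAIN SELECT order_date, total
FROM orders
WHERE customer_id = 1001;
```

In the EXPLAIN output, look at the key column: the index name should appear there rather than NULL.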


3.   Create optimized Select queries

A SELECT * (select all) query often requires much more processing than other SELECT queries. It is therefore always advisable to select specific fields, which makes processing a lot faster and aids database tuning.

Also, when you want a fixed number of results, you can use LIMIT to fetch just that many rows and avoid unnecessary processing of the entire table.

Such optimized queries contribute a lot to database tuning by increasing performance, especially when the database is large.

4.   Avoid using Nested loops in queries

Nested loops in queries cause slowdowns, especially when the data to be processed is large. Because nested loops cannot be processed in parallel and access the data strictly one row after another, they make the process lengthier.

For database optimization in MySQL, try to avoid nesting loops while structuring queries.
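As an illustration (schema invented), a query nested inside another can often be rewritten as a join so the optimizer can process the tables together:

```sql
-- Nested form: the inner query runs as a separate step
SELECT name FROM users
WHERE id IN (SELECT user_id FROM orders WHERE total > 100);

-- Join form: usually easier for the optimizer to execute efficiently
SELECT DISTINCT u.name
FROM users u
INNER JOIN orders o ON o.user_id = u.id
WHERE o.total > 100;
```

Modern MySQL versions can transform some subqueries into joins automatically, but writing the join explicitly keeps the execution plan predictable.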

5.   Analyze real-time statistics regularly

For database tuning, it is important to monitor and keep track of the database’s real-time statistics using MySQL database optimization techniques and tools such as Live Query Statistics.

Live Query Statistics (a feature of SQL Server Management Studio) is very useful for debugging query performance issues in real time, as it displays statistics of the step-by-step query execution. This makes it easier to understand and detect which steps are causing a slowdown.

What makes this tool stand out is that the statistics are available while the query is still executing, helping the database administrator understand and tweak queries where required, tuning the database to boost performance.


The next step

The above MySQL optimization techniques will help you with database tuning, but you need to practice these basics regularly to keep your database performing at its best.

If you require further assistance with database performance issues, or an expert consultation, feel free to contact me to help you overcome the challenges.

12: Top 5 Skills of a Database Consultant

Posted by MySQL NEWS No Comment yet

Big Data has taken the world by storm. With focus narrowing on the ever-increasing amount of data, database consultants are more important than ever, and there is an influx of job opportunities for them.

But, with organizations looking out for database professionals, it is very important to understand the right set of skills for a qualified and efficient database consultant. Most of the functions of a Database Administrator are technical in nature. However, there is a set of skills that differentiate the good from the best. Today we look at the top five unique skills of a database consultant and understand their importance in the industry.

Problem Solving abilities

The most basic qualification for any database consultant is the ability to tackle complex problems. With such vast amounts of data, it is very important to recognize the patterns being formed. A good database consultant with problem-solving skills will not only understand the patterns but also create algorithms and process flows to optimize the output.

While technology has advanced over the years, human intervention remains important for problem-solving; it further reduces errors and ensures the database runs efficiently.

Data Visualization

For a database consultant, it is very important to understand the data and create a visual representation of it. This representation consists of charts, graphs and flowcharts. But simple visualization lacks innovation and clarity; a mark of a good DBA is the ability to tell stories through the data.

In the modern day, stories are an essential part of our lives. Scrolling through the internet or magazines, stories are the easiest way to communicate. Storytelling through data influences the way products are designed and work. An organization looking for a good database consultant should look for an individual who can inspire the user interface as well as the user experience of the product, based on data.

Clear communication

While technical qualifications and a sound understanding of the industry are a must for any database consultant, it is equally important to have a set of soft skills as well. Communication is the key to any successful venture.

When it comes to database management, communication with the client is very important. This isn’t necessarily presentations and Excel sheets, but a display of the output: understanding the customer’s needs and presenting the work accordingly. A website framework presented as a localhost preview will have a better impact than a PPT. It is a key skill of a database consultant to identify such methods and communicate effectively.

Managerial Capabilities

Managing a project is one of the most important skills for any industry. Even for database management, project management plays an important role. Organizations looking for a good database consultant should look out for some key management qualities as a skillset.

Not every individual is capable of all the technical aspects. This means work needs to be divided as per skillsets and capabilities. As a good database consultant, it is necessary to manage both the product as well as the client. Hence, management is a key skill that an organization should seek when looking for a database consultant.

The Hardware/Software Understanding

Database management is a combination of high-end hardware and software. As the amount of data increases, the requirements increase too. Common data issues often arise that depend heavily on the hardware, such as storage devices, and the hardware in turn is driven by compatible software.

An understanding of this combination is essential to a successful execution. As a database consultant, it is important to know the why and the how. So understanding how the system works both on the inside and the outside is what makes a perfect database consultant.

So, for any organization looking for a database consultant to take control of their database management system, the above-mentioned skills can help them find the perfect match. A combination of all of them leads to efficient, optimized work.

11: Getting Started with SQL Performance Tuning

Posted by MySQL NEWS No Comment yet

You may at some point have faced slow SQL performance caused by certain queries, a slowdown that affected your operations significantly, and felt the need to improve the performance of your SQL database. The good news is that you can, by tuning it.


What is SQL Performance Tuning?

SQL performance tuning is a set of techniques used to ensure that queries against an SQL database run in the fastest possible time, giving maximum productivity.


Why is SQL Performance Tuning important?

Smaller databases may not find SQL tuning significant, but for very large databases, ranging up to terabytes (TB), SQL performance tuning plays a major role in cutting down query runtime, yielding much-improved efficiency and much faster operations.

Thus, for better performing fast operations, SQL performance tuning of databases is important.


How to do SQL Performance Tuning?

Here are a few techniques you can use for SQL performance tuning to speed up your database:

  • Ensure a database server health check-up using DMVs

Use DMVs (Dynamic Management Views) to check server health and find the areas of slowdown. They give analytics and information about active sessions and connections, telling you which objects consume the most memory and cause slowdowns.


You can use the below query to know the memory consumption of the given object, to identify areas of slowdown.


Select * from $System.discover_object_memory_usage


  • Avoid using SELECT *; instead, use SELECT with specific fields

Using SELECT * (select all) calls all the available data in the given table, which is a heavier, more time-consuming process that often fetches fields that are not actually required, wasting time and resources.


If you instead SELECT specific fields, you call only the required data. This brings significant optimization, especially when tables have a large number of rows and fields.



SELECT *
FROM Users


Instead, use

SELECT FirstName, LastName, Address, City, State, Zip
FROM Users


  • Avoid using SELECT DISTINCT; instead, use SELECT with more fields

If you use SELECT DISTINCT to remove duplicate rows, a lot of processing is required to run the query: SELECT DISTINCT effectively groups all the selected fields to produce distinct results, which takes more processing time and can cause a slowdown on a significantly large database.


Instead, if you select more fields so that rows become naturally unique, you get unduplicated records without the extra processing, optimizing your database performance.



SELECT DISTINCT FirstName, LastName, State
FROM Users


Instead, use

SELECT FirstName, LastName, Address, City, State, Zip
FROM Users


  • Prefer using Parameterized Queries

Parameterized queries can improve database performance because the query is compiled only once and the compiled plan is then executed multiple times. This saves the time of recompiling on each call, thereby improving performance.



SELECT "User ID" FROM User WHERE "Sales ID" = ?
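The ? above is the parameter marker. As a hedged illustration (using Python's sqlite3 module with invented table data, not the article's environment), a parameterized query is prepared once and re-executed with different bound values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE User ("User ID" INTEGER, "Sales ID" INTEGER)')

# executemany() reuses one prepared statement for all three rows.
conn.executemany("INSERT INTO User VALUES (?, ?)",
                 [(1, 100), (2, 200), (3, 100)])

# The ? placeholder is bound at execution time; the statement itself is
# compiled once, and binding also protects against SQL injection.
rows = conn.execute('SELECT "User ID" FROM User WHERE "Sales ID" = ?',
                    (100,)).fetchall()
print(rows)  # [(1,), (3,)]
```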


  • Prefer INNER JOIN over WHERE for creating joins

Using WHERE to create joins results in a Cartesian product (CROSS JOIN), in which all possible combinations of rows are created, causing a lot of unnecessary processing. The impact grows with the number of rows, as in bigger databases.


Using INNER JOIN instead optimizes the query, as it does not create a Cartesian product.


SELECT Users.UserID, Users.Name, Sales.LastSaleDate
FROM Users, Sales
WHERE Users.UserID = Sales.UserID


Instead, use

SELECT Users.UserID, Users.Name, Sales.LastSaleDate
FROM Users
INNER JOIN Sales
ON Users.UserID = Sales.UserID
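A minimal runnable version of this join, using Python's sqlite3 module with made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Users (UserID INTEGER, Name TEXT);
    CREATE TABLE Sales (UserID INTEGER, LastSaleDate TEXT);
    INSERT INTO Users VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO Sales VALUES (1, '2020-01-15');
""")

# INNER JOIN returns only the rows matching the ON condition,
# without forming a Cartesian product of Users x Sales first.
rows = conn.execute("""
    SELECT Users.UserID, Users.Name, Sales.LastSaleDate
    FROM Users
    INNER JOIN Sales ON Users.UserID = Sales.UserID
""").fetchall()
print(rows)  # [(1, 'Ann', '2020-01-15')]
```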



  • Prefer using LIMIT for sample queries

You can limit the number of results returned when running sample queries. Using LIMIT returns only the given number of rows instead of fetching the complete data set.



SELECT FirstName, LastName, Address, City, State, Zip
FROM Users
LIMIT 10
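To see the effect, here is a small sqlite3 sketch; the sample size of 10 and the table contents are arbitrary choices for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (FirstName TEXT, City TEXT)")
conn.executemany("INSERT INTO Users VALUES (?, ?)",
                 [(f"user{i}", "Springfield") for i in range(1000)])

# LIMIT stops after 10 rows instead of materializing all 1000.
sample = conn.execute("SELECT FirstName, City FROM Users LIMIT 10").fetchall()
print(len(sample))  # 10
```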



The Bottom Line

SQL performance tuning helps optimize database performance significantly by decreasing processing time, especially for large databases, and improves overall efficiency.

If you are stuck with performance improvement, get in touch with me. I will help you unlock the true performance of your database.

SQL Performance Analyzer

10: How to make the most of a SQL Performance Analyzer?

Posted by MySQL NEWS No Comment yet

With the sudden increase in data and its importance, database management systems have become an essential part of any organization, big or small. In most cases, a DBA is deployed to set up and maintain the database system. After the initial setup, a DBA spends most of the time monitoring the database and constantly working to optimize its throughput.

But running an analysis can be tough whenever there is a change in the database, such as an upgrade or the addition of indexes. These changes can impact your SQL database performance. With the release of Oracle Database 11g, the Real Application Testing option was introduced.

A key feature of Real Application Testing is the SQL Performance Analyzer (SPA). It brought relief to DBAs with its ability to cut down the time and effort required to identify slow queries.


How does the SQL Performance Analyzer Work?


An SQL Performance Analyzer offers a granular view of how a change impacts performance. It executes the SQL query in isolation and presents the before and after results, then generates a report detailing the effects of the change on the workload.

In case of a regressive SQL statement, the analyzer identifies and repairs it before changes such as a migration, an upgrade, or the introduction of a new index are executed against the database. Moreover, running SPA before the Database Replay feature is a best practice that saves a lot of time and effort. The main aim is to identify and repair any regression. For this purpose, SPA makes use of existing SQL Tuning Sets (STS); an STS contains predefined statements, execution information, and execution plans from the workload.


Where is a SQL Performance Analyzer used?


Some common scenarios that call for an SQL Performance Analyzer are mentioned below. These are scenarios in which changes made to the database can affect its performance.


Database upgrade: Installing a newer version of the database also installs a new query optimizer, which affects system performance. The analyzer is used to create a before and after view that points out any regressions, enabling the DBA to rewrite the affected SQL queries before the change.

Hardware and software changes: At times the system is upgraded with additional CPUs or memory, or a new operating system is installed. These system changes affect SQL performance, and in such scenarios an SQL Performance Analyzer plays a key role in identifying any deterioration.

Patch deployment: Before deploying a patch intended to fix performance issues, it is advisable to run it through the SQL Performance Analyzer. It checks your production workload for anything that might cause a regression after implementation.

Optimizer statistics: The plans generated by the system rely heavily on the optimizer's decisions, which in turn are driven by optimizer statistics. Whenever new statistics are collected and existing ones refreshed, SQL performance can be affected. Hence, to prevent any regressions, the analyzer keeps a check on the statistics.

Changes to schema: Whenever a database undergoes schema changes, such as the addition or alteration of an index, SQL performance is inevitably affected. As this can degrade performance, it is important for the SQL Performance Analyzer to keep a check on such changes.

The SQL Performance Analyzer process


The execution process of an SPA is simple. After capturing the SQL workload and storing it in a SQL Tuning Set, the DBA sets up the test target. This is where the analyzer comes into the picture.

An SQL Performance Analyzer task is then assigned to the test system. Using it, the DBA creates a pre-change task, implements the change, and then creates a post-change task.

The two tasks are compared for any regressed queries or changes that might affect the performance of the SQL system. The analysis guides the DBA through the optimization process, which is repeated until the required performance statistics are obtained.
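SPA itself is an Oracle feature, but the pre-change/post-change idea can be sketched in any environment. The snippet below imitates it with Python's sqlite3: the "change" under test is a new index, and each trial records the query's result, runtime, and execution plan. All table names and data are invented for illustration; this is an analogy, not Oracle's SPA API.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i % 500) for i in range(50000)])

query = "SELECT COUNT(*) FROM orders WHERE customer = 42"

def snapshot(q):
    """One SPA-style trial: result, elapsed time, and execution plan."""
    start = time.perf_counter()
    result = conn.execute(q).fetchone()[0]
    elapsed = time.perf_counter() - start
    plan = conn.execute("EXPLAIN QUERY PLAN " + q).fetchall()[-1][-1]
    return result, elapsed, plan

# Pre-change trial: the query has to scan the whole table.
count_before, t_before, plan_before = snapshot(query)

# The change under test: an index on the filtered column.
conn.execute("CREATE INDEX idx_customer ON orders(customer)")

# Post-change trial: the planner can now use the index.
count_after, t_after, plan_after = snapshot(query)

# Comparing the two trials mirrors SPA's before/after report.
print(plan_before)  # a full table scan, e.g. "SCAN orders"
print(plan_after)   # an index search on idx_customer
```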


I have been working for almost 2 decades on Database optimizations. I can help you optimize your database to make your application perform better and save costs. Contact me to know more about how I can help you.

09: Introduction to the SQL Join concepts

Posted by MySQL NEWS No Comment yet

Introduction to the SQL Join concepts


As the importance of data continues to grow exponentially, so does its volume, and data management is a critical function for any organization, big or small. Hence, a database management system is an essential part of normal operations.


Owing to the ever-increasing amount of data, databases contain multiple tables, and it is highly unlikely that a user will only ever need a single one: multiple tables must be accessed together. In a relational database management system, the simple way to access multiple tables is through joins. Let's look at joins and their types with a few examples.

What is a Join?

Whenever data from different tables needs to be accessed, joins are used to link rows across tables based on related columns. A simple example is retrieving details of an individual, such as phone number, address, and age, against that individual's name.

Why are Joins important?


With multiple data fields and types, working with multiple tables is necessary, and without joins it would be extremely difficult to retrieve related data entries. Joins also form the basis of a relational database: for any relational DBMS it is essential to relate tables, and joins are the solution to exactly that problem.

What are the different types of SQL Joins?


Joins serve multiple functions, and different situations call for different types of joins. There are four major types of joins in use. Let's have a look at each of them.

1.    Inner Join

Consider a Venn diagram with two different sets, namely, A and B. The intersection between the two sets A and B (A ∩ B) is the condition of the join. An inner join will consist of the selection of this common area. In technical terms, the inner join will return all the rows that fulfill the condition of the join from two or more tables.


SELECT columns
FROM A
INNER JOIN B
ON A.columnname = B.columnname

2.    Left SQL Join

Also known as the left outer join, this is the join which focuses on the left-hand table. Consider a Venn diagram with A and B. The left join will include the output from entire set A and the intersection of A and B (A ∩ B). In a DBMS, a left join will retrieve all rows from the left side table and only those from the right which fulfill the criteria of the join.


SELECT columns
FROM A
LEFT JOIN B
ON A.columnname = B.columnname

Left outer join without intersection: Another subtype of the left join returns only those rows from the left-side table that have no matching row in the right-side table; rows fulfilling the join condition are filtered out.


SELECT columns
FROM A
LEFT JOIN B
ON A.columnname = B.columnname
WHERE B.columnname IS NULL

3.    Right SQL Join

Also known as right outer join, this conditional join focuses on the right side table. The right join returns all the rows from the right side table and the rows from the left side tables that fulfill the conditions of the join.


SELECT columns
FROM A
RIGHT JOIN B
ON A.columnname = B.columnname

Right outer join without intersection: Similar to the left outer join without intersection, this join retrieves only those rows from the right-side table that have no matching row in the left-side table.


SELECT columns
FROM A
RIGHT JOIN B
ON A.columnname = B.columnname
WHERE A.columnname IS NULL

4.  Full Join

Also known as a full outer join, this join retrieves all rows from both tables named in the ON condition: rows from either side are returned whether or not the join condition is met. In a Venn diagram, it covers the entire union of sets A and B, and it includes NULL values where one side has no match.


SELECT columns
FROM A
FULL OUTER JOIN B
ON A.columnname = B.columnname

Full join without intersection: Similar to a full SQL join, this retrieves rows from both tables named in the ON condition, but only those where the join condition is not met; unlike a FULL OUTER JOIN, it excludes the matched (intersection) rows.


SELECT columns
FROM A
FULL OUTER JOIN B
ON A.columnname = B.columnname
WHERE A.columnname IS NULL
OR B.columnname IS NULL
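The outer-join variants above can be tried end to end. Here is a sketch using Python's sqlite3 module and two tiny invented tables, A and B (SQLite supports FULL OUTER JOIN only from version 3.39, so the sketch sticks to the left-join variants):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE A (id INTEGER, name TEXT);
    CREATE TABLE B (id INTEGER, score INTEGER);
    INSERT INTO A VALUES (1, 'Ann'), (2, 'Bob'), (3, 'Cat');
    INSERT INTO B VALUES (1, 90), (2, 80);
""")

# Left join: every row of A, with NULL where B has no match (id = 3).
left = conn.execute("""
    SELECT A.id, A.name, B.score
    FROM A LEFT JOIN B ON A.id = B.id
""").fetchall()
print(left)  # [(1, 'Ann', 90), (2, 'Bob', 80), (3, 'Cat', None)]

# Left join without intersection: only A-rows with no match in B.
only_a = conn.execute("""
    SELECT A.id, A.name
    FROM A LEFT JOIN B ON A.id = B.id
    WHERE B.id IS NULL
""").fetchall()
print(only_a)  # [(3, 'Cat')]
```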


The above article explained the different types of SQL joins used in a relational DBMS environment. It also emphasizes that joins are an essential part of an RDBMS: they enable the relational part of the system by interconnecting different tables and their data.

#8: How to Make the most out of a MySQL Query?

Posted by MySQL NEWS No Comment yet

Database management systems have evolved over the years, yet it is crucial to stick to the basics to optimize their performance. From a developer's perspective, one of the most critical components of an efficient, database-driven application is the SQL query.


One ill-written SQL query can not only slow down the entire database but also affect other components of the application. Good observation and statistical analysis are critical, but MySQL also comes loaded with specific tools and techniques to optimize SQL queries and improve the overall response time of the database. Let's have a look at some of these techniques.



Benchmarking a MySQL Query

Assumptions hold only under certain conditions, so numbers are required to support your results and give you clarity on efficiency. Benchmarking points out issues in the queries, such as bottlenecks and badly written queries.


With the help of tools such as SuperSmack and SysBench, it is possible to simulate high-stress situations and find faults in the database. This simulation creates the pathway to a solution.

Profiling a MySQL Query

After simulating high-stress situations through benchmarking, it is important to pinpoint the cracks in the database. This can be done using profiling. The primary function of profiling is to point out bottlenecks in the CPU, memory, disk I/O, network, or any other component.

After identifying slow queries, internal MySQL tools like EXPLAIN and SHOW STATUS give you information about resource usage and side effects, helping you understand why those queries are slow.

Schema Designing

This step is a prerequisite for query execution: a schema design needs to be completed before any query can even be written. When creating a schema, keeping in mind details such as memory requirements and the type of data to be stored can help optimize MySQL query execution.

Contrary to the popular opinion that it creates a complex schema, normalizing a database minimizes data redundancy and hence speeds up queries. To make the most of your MySQL queries, normalization is essential; moreover, the schema can be de-normalized later if required.



Choosing the right index

A query is only as good as its choice of indexes, so selecting indexes is very important. An index, usually a B-tree, speeds up query processing: if a query is executed without an index, the DBMS must scan every row of the table and compare it with the given condition, whereas with a suitable index the MySQL query can be answered far more quickly.

During the process, one needs to maintain a balance. Indexes not only take up memory space but also hurt write performance, since they must be updated after every write operation. Profiling helps you find this balance, which varies from situation to situation.
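How much an index matters shows up directly in the query plan. The sketch below uses Python's sqlite3 module and SQLite's EXPLAIN QUERY PLAN statement as a stand-in for MySQL's EXPLAIN; the posts table and index are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, author TEXT, body TEXT)")
conn.execute("CREATE INDEX idx_author ON posts(author)")

def plan(query):
    """Return the planner's strategy for a query."""
    return conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[-1][-1]

# A filter on the indexed column is answered by a B-tree search...
plan_indexed = plan("SELECT * FROM posts WHERE author = 'ann'")
print(plan_indexed)    # e.g. "SEARCH posts USING INDEX idx_author (author=?)"

# ...while a filter the index cannot serve falls back to a full scan.
plan_unindexed = plan("SELECT * FROM posts WHERE body LIKE '%tuning%'")
print(plan_unindexed)  # e.g. "SCAN posts"
```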

Partitioning the tables

For any database, it is essential to distinguish frequently used data from the rest. A common example is a blog, where data such as the title of a post is read more often than the rest of the content. If tables are partitioned based on this distinction, the infrequently used data takes up less memory and the executed queries are faster.

When the data is kept in separate tables, the caching process is also simplified, because MySQL no longer has to invalidate the cache for unaffected data.

MySQL queries might seem extremely simple or extremely complex, but the simple tips mentioned above can change the entire perspective and yield a more efficient MySQL database.


I have almost 2 decades of experience in MySQL database tuning and performance. Get in touch with me and I can show you how you can improve your application performance.