Monday, September 22, 2008

200+ Placement MySQL Interview Questions and Answers


What's MySQL ?

MySQL (pronounced "my ess cue el") is an open source relational database management system (RDBMS) that uses Structured Query Language (SQL), the most popular language for adding, accessing, and processing data in a database. Because it is open source, anyone can download MySQL and tailor it to their needs in accordance with the general public license. MySQL is noted mainly for its speed, reliability, and flexibility. ...

What is DDL, DML and DCL ?
If you look at the large variety of SQL commands, they can be divided into three large subgroups. Data Definition Language (DDL) deals with database schemas and descriptions of how the data should reside in the database, so statements like CREATE TABLE or ALTER TABLE belong to DDL. Data Manipulation Language (DML) deals with data manipulation and therefore includes the most common SQL statements, such as SELECT, INSERT, etc. Data Control Language (DCL) includes commands such as GRANT and mostly concerns rights, permissions, and other controls of the database system.

How do you get the number of rows affected by a query?
SELECT COUNT(user_id) FROM users would only return the number of non-NULL user_id values; the number of rows actually changed by an INSERT, UPDATE, or DELETE is reported by ROW_COUNT() or by the affected-rows value your client API returns.

If the value in the column is repeatable, how do you find out the unique values?
Use DISTINCT in the query, such as SELECT DISTINCT user_firstname FROM users; You can also ask for a number of distinct values by saying SELECT COUNT (DISTINCT user_firstname) FROM users;

How do you return a hundred books starting from the 25th?
SELECT book_title FROM books LIMIT 25, 100. The first number in LIMIT is the zero-based offset, the second is the number of rows to return.

You wrote a search engine that should retrieve 10 results at a time, but at the same time you’d like to know how many rows there are in total. How do you display that to the user?
SELECT SQL_CALC_FOUND_ROWS page_title FROM web_pages LIMIT 0, 10; SELECT FOUND_ROWS(); The second query (note that COUNT() is never used) tells you how many results there are in total, so you can display a phrase like "Found 13,450,600 results, displaying 1-10". Note that FOUND_ROWS() ignores the LIMIT you specified and returns the total number of rows matched by the query.

How would you write a query to select all teams that won either 2, 4, 6 or 8 games?
SELECT team_name FROM teams WHERE team_won IN (2, 4, 6, 8)

How would you select all the users whose phone number is null?
SELECT user_name FROM users WHERE ISNULL(user_phonenumber);

What does this query mean: SELECT user_name, user_isp FROM users LEFT JOIN isps USING (user_id) ?
It’s equivalent to saying SELECT user_name, user_isp FROM users LEFT JOIN isps ON users.user_id = isps.user_id

How do you find out which auto increment was assigned on the last insert?

SELECT LAST_INSERT_ID() will return the last value assigned by the auto_increment function. Note that you don’t have to specify the table name.

What does the --i-am-a-dummy flag do when starting MySQL?
It makes the mysql command-line client refuse UPDATE and DELETE statements that have no WHERE clause (it is an alias for --safe-updates).

On executing the DELETE statement I keep getting the error about foreign key constraint failing. What do I do?
It means that some of the data you are trying to delete is still referenced from another table. For example, if you have a table for universities and a table for students that contains the ID of the university they attend, deleting a row from the universities table will fail if the students table still contains people enrolled at that university. The proper way is to delete the dependent data first and then delete the university in question. A quick way would involve running SET foreign_key_checks=0 before the DELETE command and setting the parameter back to 1 after the DELETE is done. If your foreign key was declared with ON DELETE CASCADE, the data in dependent tables will be removed automatically.
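As an illustrative sketch of the universities/students example above (table and column names are hypothetical), the foreign key could be declared with ON DELETE CASCADE so dependent rows disappear on their own:

CREATE TABLE universities (
    university_id   INT NOT NULL PRIMARY KEY,
    university_name VARCHAR(100)
) ENGINE = InnoDB;

CREATE TABLE students (
    student_id    INT NOT NULL PRIMARY KEY,
    university_id INT NOT NULL,
    FOREIGN KEY (university_id) REFERENCES universities (university_id) ON DELETE CASCADE
) ENGINE = InnoDB;

# With ON DELETE CASCADE this also removes the students of that university:
DELETE FROM universities WHERE university_id = 42;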

When would you use ORDER BY in DELETE statement?
When you’re not deleting by row ID. For example, DELETE FROM techpreparation_com_questions ORDER BY timestamp DESC LIMIT 1 deletes the most recently posted question in the table techpreparation_com_questions.

How can you see all indexes defined for a table?
SHOW INDEX FROM techpreparation_questions;

How would you change a column from VARCHAR(10) to VARCHAR(50)?
ALTER TABLE techpreparation_questions CHANGE techpreparation_content techpreparation_content VARCHAR(50);

How would you delete a column?
ALTER TABLE techpreparation_answers DROP answer_user_id;

How would you change a table to InnoDB?
ALTER TABLE techpreparation_questions ENGINE = InnoDB;

When you create a table, and then run SHOW CREATE TABLE on it, you occasionally get different results than what you typed in. What does MySQL modify in your newly created tables?
1. VARCHARs with length less than 4 become CHARs
2. CHARs with length more than 3 become VARCHARs.
3. NOT NULL gets added to the columns declared as PRIMARY KEYs
4. Default values such as NULL are specified for each column
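A rough illustration (the table name is made up and the exact output depends on the MySQL version):

mysql> CREATE TABLE silent_demo (code VARCHAR(2), id INT, PRIMARY KEY (id));
mysql> SHOW CREATE TABLE silent_demo;
# On older servers the short column may come back as `code` char(2) default NULL,
# and the primary key column as `id` int(11) NOT NULL default '0'.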

How do I find out all databases starting with ‘tech’ to which I have access?
SHOW DATABASES LIKE ‘tech%’;

How do you concatenate strings in MySQL?
CONCAT (string1, string2, string3)

How do you get a portion of a string?
SELECT SUBSTR(title, 1, 10) from techpreparation_questions;

What’s the difference between CHAR_LENGTH and LENGTH?
The first is, naturally, the character count. The second is byte count. For the Latin characters the numbers are the same, but they’re not the same for Unicode and other encodings.
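For example, assuming a UTF-8 string literal, an accented character is one character but two bytes:

mysql> SELECT CHAR_LENGTH(_utf8'é'), LENGTH(_utf8'é');
# returns 1 and 2 respectively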

How do you convert a string to UTF-8?

SELECT CONVERT(techpreparation_question USING utf8);

What do % and _ mean inside LIKE statement?
% corresponds to 0 or more characters, _ is exactly one character.

What does + mean in REGEXP?
One or more occurrences of the preceding element. Appendix G, Regular Expressions, of the MySQL manual is worth perusing before the interview.

How do you get the month from a timestamp?
SELECT MONTH(techpreparation_timestamp) from techpreparation_questions;

How do you offload the time/date handling to MySQL?
SELECT DATE_FORMAT(techpreparation_timestamp, ‘%Y-%m-%d’) from techpreparation_questions; A similar TIME_FORMAT function deals with time.

How do you add three minutes to a date?
ADDDATE(techpreparation_publication_date, INTERVAL 3 MINUTE)

What’s the difference between Unix timestamps and MySQL timestamps?
Internally Unix timestamps are stored as 32-bit integers, while MySQL timestamps are stored in a similar manner, but represented in readable YYYY-MM-DD HH:MM:SS format.

How do you convert between Unix timestamps and MySQL timestamps?
UNIX_TIMESTAMP converts from MySQL timestamp to Unix timestamp, FROM_UNIXTIME converts from Unix timestamp to MySQL timestamp.
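For example (column and table names follow the earlier examples; the integer shown is illustrative and the result depends on the server time zone):

mysql> SELECT UNIX_TIMESTAMP(techpreparation_timestamp) FROM techpreparation_questions;
mysql> SELECT FROM_UNIXTIME(1222041600);
# FROM_UNIXTIME returns the corresponding 'YYYY-MM-DD HH:MM:SS' value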

What are ENUMs used for in MySQL?
You can limit the possible values that go into the column. CREATE TABLE months (month ENUM('January', 'February', 'March', …)); INSERT months VALUES ('April');

How are ENUMs and SETs represented internally?
ENUM values are stored internally as sequential integer indexes; SET values are stored as bit fields in which each member corresponds to a power of two, due to storage optimizations.

How do you start and stop MySQL on Windows?
net start MySQL, net stop MySQL

How do you start MySQL on Linux?
/etc/init.d/mysql start

Explain the difference between mysql and mysqli interfaces in PHP?
mysqli is the object-oriented version of mysql library functions.

What’s the default port for MySQL Server?
3306

What does the tee command do in MySQL?
tee followed by a filename turns on MySQL logging to the specified file. It can be stopped with the notee command.

Can you save your connection settings to a conf file?
Yes, and name it ~/.my.cnf. You might want to change the permissions on the file to 600, so that it’s not readable by others.

How do you change a password for an existing user via mysqladmin?
mysqladmin -u root -p password "newpassword"

How do you use mysqldump to create a copy of a database?
mysqldump -h mysqlhost -u username -p mydatabasename > dbdump.sql

Have you ever used MySQL Administrator and MySQL Query Browser?
Describe the tasks you accomplished with these tools.

What are some good ideas regarding user security in MySQL?
There is no user without a password. There is no user without a user name. There is no user whose Host column contains % (which here indicates that the user can log in from anywhere in the network or the Internet). There are as few users as possible (in the ideal case only root) who have unrestricted access.

Explain the difference between MyISAM Static and MyISAM Dynamic. ?
In MyISAM static all the fields have fixed width. The Dynamic MyISAM table would include fields such as TEXT, BLOB, etc. to accommodate the data types with various lengths. MyISAM Static would be easier to restore in case of corruption, since even though you might lose some data, you know exactly where to look for the beginning of the next record.

What does myisamchk do?
It checks, repairs, and optimizes MyISAM tables; compressing MyISAM tables to reduce their disk usage is done with the separate myisampack utility.

Explain advantages of InnoDB over MyISAM?
Row-level locking, transactions, foreign key constraints and crash recovery.

Explain advantages of MyISAM over InnoDB?
A much more conservative approach to disk space management - each MyISAM table is stored in a separate file, which can be compressed with myisampack if needed. With InnoDB the tables are stored in a tablespace, and not much further optimization is possible. All data except for TEXT and BLOB can occupy 8,000 bytes at most. No full-text indexing is available for InnoDB. COUNT(*) queries execute more slowly than in MyISAM due to tablespace complexity.

What are HEAP tables in MySQL?
HEAP tables are in-memory tables. They are usually used for high-speed temporary storage. No TEXT or BLOB fields are allowed within HEAP tables. You can only use the comparison operators = and <=>. HEAP tables do not support AUTO_INCREMENT, and indexed columns must be NOT NULL.
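A minimal sketch of such a table (the table is hypothetical; older servers spell the engine TYPE = HEAP):

CREATE TABLE session_cache (
    session_id CHAR(32) NOT NULL,
    user_id    INT NOT NULL,
    PRIMARY KEY (session_id)
) ENGINE = MEMORY;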

How do you control the max size of a HEAP table?
MySQL config variable max_heap_table_size.

What are CSV tables?
These are special tables whose data is stored in comma-separated-values text files. They cannot be indexed.

Explain federated tables. ?
Introduced in MySQL 5.0, federated tables allow access to the tables located on other databases on other servers.

What is SERIAL data type in MySQL?
BIGINT UNSIGNED NOT NULL AUTO_INCREMENT UNIQUE

What happens when the column is set to AUTO INCREMENT and you reach the maximum value for that table?
It stops incrementing. It does not overflow to 0 to prevent data losses, but further inserts are going to produce an error, since the key has been used already.

Explain the difference between BOOL, TINYINT and BIT. ?
Prior to MySQL 5.0.3 those are all synonyms. From MySQL 5.0.3 on, BIT is a separate data type that can store up to 64 bits and should be used for bit-field data.

Explain the difference between FLOAT, DOUBLE and REAL. ?
FLOATs store floating point numbers with 8 place accuracy and take up 4 bytes. DOUBLEs store floating point numbers with 16 place accuracy and take up 8 bytes. REAL is by default a synonym for DOUBLE; it becomes a synonym for FLOAT only when the server runs in ANSI mode.

If you specify the data type as DECIMAL (5,2), what’s the range of values that can go in this table?
999.99 to -99.99. Note that with the negative number the minus sign is considered one of the digits.

What happens if a table has one column defined as TIMESTAMP?
That field gets the current timestamp whenever the row gets altered.

But what if you really want to store the timestamp data, such as the publication date of the article?
Create two columns of type TIMESTAMP and use the second one for your real data.
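A sketch of that approach with a hypothetical articles table - the first TIMESTAMP column is the one MySQL maintains automatically, the second holds the real publication date:

CREATE TABLE articles (
    article_id    INT NOT NULL PRIMARY KEY,
    last_modified TIMESTAMP,      # auto-set/updated by MySQL
    published     TIMESTAMP       # set explicitly by the application
);
INSERT INTO articles (article_id, published) VALUES (1, '2006-09-02 17:38:44');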

Explain data type TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP ?
The column exhibits the same behavior as a single timestamp column in a table with no other timestamp columns.

What does TIMESTAMP ON UPDATE CURRENT_TIMESTAMP data type do?
On initialization it places zero in that column; on subsequent updates it puts the current timestamp in.

Explain TIMESTAMP DEFAULT '2006-09-02 17:38:44' ON UPDATE CURRENT_TIMESTAMP. ?
A default value is used on initialization, a current timestamp is inserted on update of the row.

If I created a column with data type VARCHAR(3), what would I expect to see in MySQL table?
CHAR(3), since MySQL automatically adjusted the data type.

General Information About MySQL
MySQL is a very fast, multi-threaded, multi-user, and robust SQL (Structured Query Language) database server.

MySQL is free software. It is licensed with the GNU GENERAL PUBLIC LICENSE http://www.gnu.org/.

What Is MySQL
MySQL, the most popular Open Source SQL database, is provided by MySQL AB. MySQL AB is a commercial company that builds its business providing services around the MySQL database. See section 1.2 What Is MySQL AB.

MySQL is a database management system.
A database is a structured collection of data. It may be anything from a simple shopping list to a picture gallery or the vast amounts of information in a corporate network. To add, access, and process data stored in a computer database, you need a database management system such as MySQL. Since computers are very good at handling large amounts of data, database management plays a central role in computing, as stand-alone utilities, or as parts of other applications.

MySQL is a relational database management system.
A relational database stores data in separate tables rather than putting all the data in one big storeroom. This adds speed and flexibility. The tables are linked by defined relations making it possible to combine data from several tables on request. The SQL part of MySQL stands for "Structured Query Language" - the most common standardized language used to access databases.

MySQL is Open Source Software.
Open source means that it is possible for anyone to use and modify. Anybody can download MySQL from the Internet and use it without paying anything. Anybody so inclined can study the source code and change it to fit their needs. MySQL uses the GPL (GNU General Public License) http://www.gnu.org, to define what you may and may not do with the software in different situations. If you feel uncomfortable with the GPL or need to embed MySQL into a commercial application you can buy a commercially licensed version from us.

Why use MySQL?
MySQL is very fast, reliable, and easy to use. If that is what you are looking for, you should give it a try. MySQL also has a very practical set of features developed in very close cooperation with our users. You can find a performance comparison of MySQL to some other database managers on our benchmark page. See section 12.7 Using Your Own Benchmarks. MySQL was originally developed to handle very large databases much faster than existing solutions and has been successfully used in highly demanding production environments for several years. Though under constant development, MySQL today offers a rich and very useful set of functions. The connectivity, speed, and security make MySQL highly suited for accessing databases on the Internet.

The technical features of MySQL
For advanced technical information, see section 7 MySQL Language Reference. MySQL is a client/server system that consists of a multi-threaded SQL server that supports different backends, several different client programs and libraries, administrative tools, and a programming interface. We also provide MySQL as a multi-threaded library which you can link into your application to get a smaller, faster, easier to manage product. MySQL has a lot of contributed software available.

It is very likely that you will find that your favorite application/language already supports MySQL. The official way to pronounce MySQL is ``My Ess Que Ell'' (not MY-SEQUEL). But we try to avoid correcting people who say MY-SEQUEL.

The Main Features of MySQL
The following list describes some of the important characteristics of MySQL:

Fully multi-threaded using kernel threads. That means it can easily use multiple CPUs if available.
C, C++, Eiffel, Java, Perl, PHP, Python and Tcl APIs.
Works on many different platforms.
Many column types: signed/unsigned integers 1, 2, 3, 4, and 8 bytes long, FLOAT, DOUBLE, CHAR, VARCHAR, TEXT, BLOB, DATE, TIME, DATETIME, TIMESTAMP, YEAR, SET, and ENUM types.
Very fast joins using an optimized one-sweep multi-join.
Full operator and function support in the SELECT and WHERE parts of queries. Example:
mysql> SELECT CONCAT(first_name, " ", last_name) FROM tbl_name
WHERE income/dependents > 10000 AND age > 30;

SQL functions are implemented through a highly optimized class library and should be as fast as they can get! Usually there shouldn't be any memory allocation at all after query initialization.
Full support for SQL GROUP BY and ORDER BY clauses. Support for group functions (COUNT(), COUNT(DISTINCT), AVG(), STD(), SUM(), MAX() and MIN()).
Support for LEFT OUTER JOIN and RIGHT OUTER JOIN with ANSI SQL and ODBC syntax.
You can mix tables from different databases in the same query (as of Version 3.22).
A privilege and password system that is very flexible and secure and allows host-based verification. Passwords are secure because all password traffic is encrypted when you connect to a server.
ODBC (Open-DataBase-Connectivity) support for Win32 (with source). All ODBC 2.5 functions and many others. For example, you can use MS Access to connect to your MySQL server. See section 18 MySQL ODBC Support.
Very fast B-tree disk tables with index compression.
Up to 32 indexes per table are allowed. Each index may consist of 1 to 16 columns or parts of columns. The maximum index length is 500 bytes (this may be changed when compiling MySQL). An index may use a prefix of a CHAR or VARCHAR field. Fixed-length and variable-length records.
In-memory hash tables which are used as temporary tables.
Handles large databases. We are using MySQL with some databases that contain 50,000,000 records and we know of users who use MySQL with 60,000 tables and about 5,000,000,000 rows.
All columns have default values. You can use INSERT to insert a subset of a table's columns; those columns that are not explicitly given values are set to their default values.
Uses GNU Automake, Autoconf, and libtool for portability.
Written in C and C++. Tested with a broad range of different compilers.
A very fast thread-based memory allocation system.
No memory leaks. Tested with a commercial memory leakage detector (purify).
Includes myisamchk, a very fast utility for table checking, optimization, and repair. See section 15 Maintaining a MySQL Installation.
Full support for several different character sets, including ISO-8859-1 (Latin1), big5, ujis, and more. For example, the Scandinavian characters `å', `ä' and `ö' are allowed in table and column names.
All data are saved in the chosen character set. All comparisons for normal string columns are case insensitive.
Sorting is done according to the chosen character set (the Swedish way by default). It is possible to change this when the MySQL server is started up. To see an example of very advanced sorting, look at the Czech sorting code. MySQL supports many different character sets that can be specified at compile and run time.
Aliases on tables and columns are allowed as in the SQL92 standard.
DELETE, INSERT, REPLACE, and UPDATE return how many rows were changed (affected). It is possible to return the number of rows matched instead by setting a flag when connecting to the server.
Function names do not clash with table or column names. For example, ABS is a valid column name. The only restriction is that for a function call, no spaces are allowed between the function name and the `(' that follows it. See section 7.39 Is MySQL Picky About Reserved Words?.
All MySQL programs can be invoked with the --help or -? options to obtain online assistance.
The server can provide error messages to clients in many languages. See section 10.1 What Languages Are Supported by MySQL?. Clients may connect to the MySQL server using TCP/IP Sockets, Unix Sockets (Unixes), or Named Pipes (NT). The MySQL-specific SHOW command can be used to retrieve information about databases, tables, and indexes. The EXPLAIN command can be used to determine how the optimizer resolves a query.

Database Basics
Databases are managed by a relational database management system (RDBMS). An RDBMS supports a database language to create and delete databases and to manage and search data. The database language used in almost all DBMSs is SQL, a set of statements that define and manipulate data. After creating a database, the most common SQL statements used are INSERT, UPDATE, DELETE, and SELECT, which add, change, remove, and search data in a database, respectively.
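As a minimal sketch using the customer table described below (names and values are illustrative):

INSERT INTO customer (cust_id, name, city) VALUES (1, 'Matthew Richardson', 'Richmond');
UPDATE customer SET city = 'Melbourne' WHERE cust_id = 1;
SELECT name, city FROM customer WHERE cust_id = 1;
DELETE FROM customer WHERE cust_id = 1;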

Database

A repository to store data.

Table

The part of a database that stores the data. A table has columns or attributes, and the data is stored in rows.

Attributes

The columns in a table. All rows in table entities have the same attributes. For example, a customer table might have the attributes name, address, and city. Each attribute has a data type such as string, integer, or date.

Rows

The data entries in a table. Rows contain values for each attribute. For example, a row in a customer table might contain the values "Matthew Richardson," "Punt Road," and "Richmond." Rows are also known as records.

Relational model

A model that uses tables to store data and manage the relationship between tables.

Relational database management system

A software system that manages data in a database and is based on the relational model. DBMSs have several components described in detail in Chapter 1.

SQL

A query language that interacts with a DBMS. SQL is a set of statements to manage databases, tables, and data.

Constraints

Restrictions or limitations on tables and attributes. For example, a wine can be produced only by one winery, an order for wine can't exist if it isn't associated with a customer, having a name attribute could be mandatory for a customer.

Primary key

One or more attributes that contain values that uniquely identify each row. For example, a customer table might have the primary key of cust ID. The cust ID attribute is then assigned a unique value for each customer. A primary key is a constraint of most tables.

Index

A data structure used for fast access to rows in a table. An index is usually built for the primary key of each table and can then be used to quickly find a particular row. Indexes are also defined and built for other attributes when those attributes are frequently used in queries.
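For example, a hypothetical customer table might declare its primary key and an extra index like this:

CREATE TABLE customer (
    cust_id INT NOT NULL,
    name    VARCHAR(50),
    city    VARCHAR(50),
    PRIMARY KEY (cust_id),     # index built automatically for the primary key
    INDEX city_idx (city)      # additional index for an attribute used often in queries
);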

Entity-relationship modeling

A technique used to describe the real-world data in terms of entities, attributes, and relationships.

Normalized database

A correctly designed database that is created from an ER model. There are different types or levels of normalization, and a third-normal form database is generally regarded as being an acceptably designed relational database.

MySQL Command Interpreter
The MySQL command interpreter is commonly used to create databases and tables in web database applications and to test queries. Throughout the remainder of this chapter we discuss the SQL statements for managing a database. All these statements can be directly entered into the command interpreter and executed. The statements can also be included in server-side PHP scripts, as discussed in later chapters.

Once the MySQL DBMS server is running, the command interpreter can be used. The command interpreter can be run using the following command from the shell, assuming you've created a user hugh with a password shhh:

% /usr/local/bin/mysql -uhugh -pshhh
The shell prompt is represented here as a percentage character, %.

Running the command interpreter displays the output:

Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 36 to server version: 3.22.38

Type 'help' for help.

mysql>
The command interpreter displays a mysql> prompt and, after executing any command or statement, it redisplays the prompt. For example, you might issue the statement:

mysql> SELECT NOW( );
This statement reports the time and date by producing the following output:

+---------------------+
| NOW( ) |
+---------------------+
| 2002-01-01 13:48:07 |
+---------------------+
1 row in set (0.00 sec)

mysql>
After running a statement, the interpreter redisplays the mysql> prompt. We discuss the SELECT statement later in this chapter.

As with all other SQL statements, the SELECT statement ends in a semicolon. Almost all SQL command interpreters permit any amount of whitespace—spaces, tabs, or carriage returns—in SQL statements, and they check syntax and execute statements only after encountering a semicolon that is followed by a press of the Enter key. We have used uppercase for the SQL statements throughout this book. However, any mix of upper- and lowercase is equivalent.

On startup, the command interpreter encourages the use of the help command. Typing help produces a list of commands that are native to the MySQL interpreter and that aren't part of SQL. All non-SQL commands can be entered without the terminating semicolon, but the semicolon can be included without causing an error.

The MySQL command interpreter allows flexible entry of commands and SQL statements:

The up and down arrow keys allow previously entered commands and statements to be browsed and used.

The interpreter has command completion. If you type the first few characters of a string that has previously been entered and press the Tab key, the interpreter automatically completes the command. For example, if wines is typed and the Tab key pressed, the command interpreter outputs winestore, assuming the word winestore has been previously used.

If there's more than one option that begins with the characters entered, or you wish the strings that match the characters to be displayed, press the Tab key twice to show all matches. You can then enter additional characters to remove any ambiguity and press the Tab key again for command completion.

Several common statements and commands are pre-stored, including most of the SQL keywords discussed in this chapter.

To use the default text editor to create SQL statements, enter the command edit in the interpreter. This invokes the editor defined by the EDITOR shell environment variable. When the editor is exited, the MySQL command interpreter reads, parses, and runs the file created in the editor.

When the interpreter is quit and run again later, the history of commands and statements is kept. It is still possible to scroll up using the up arrow and to execute commands and statements that were entered earlier.

You can run commands and SQL statements without actually launching the MySQL command interpreter. For example, to run SELECT now( ) from the Linux shell, enter the following command:

mysql -ppassword -e "SELECT now( );"
This is particularly useful for adding SQL commands to shell or other scripts.

Installing a MySQL Binary Distribution
You need the following tools to install a MySQL binary distribution:
GNU gunzip to uncompress the distribution.
A reasonable tar to unpack the distribution. GNU tar is known to work. Sun tar is known to have problems.
An alternative installation method under Linux is to use RPM (RedHat Package Manager) distributions.

If you run into problems, PLEASE ALWAYS USE mysqlbug when posting questions to mysql@lists.mysql.com. Even if the problem isn't a bug, mysqlbug gathers system information that will help others solve your problem. By not using mysqlbug, you lessen the likelihood of getting a solution to your problem! You will find mysqlbug in the `bin' directory after you unpack the distribution.

The basic commands you must execute to install and use a MySQL binary distribution are:

shell> groupadd mysql
shell> useradd -g mysql mysql
shell> cd /usr/local
shell> gunzip < /path/to/mysql-VERSION-OS.tar.gz | tar xvf -
shell> ln -s mysql-VERSION-OS mysql
shell> cd mysql
shell> scripts/mysql_install_db
shell> chown -R mysql /usr/local/mysql
shell> chgrp -R mysql /usr/local/mysql
shell> bin/safe_mysqld --user=mysql &


You can add new users using the bin/mysql_setpermission script if you install the DBI and Msql-Mysql-modules Perl modules.
A more detailed description follows.

Pick the directory under which you want to unpack the distribution, and move into it. In the example below, we unpack the distribution under `/usr/local' and create a directory `/usr/local/mysql' into which MySQL is installed. (The following instructions therefore assume you have permission to create files in `/usr/local'. If that directory is protected, you will need to perform the installation as root.)

How to Get MySQL. MySQL binary distributions are provided as compressed tar archives and have names like `mysql-VERSION-OS.tar.gz', where VERSION is a number (for example, 3.21.15), and OS indicates the type of operating system for which the distribution is intended (for example, pc-linux-gnu-i586). Add a user and group for mysqld to run as:
shell> groupadd mysql
shell> useradd -g mysql mysql

These commands add the mysql group and the mysql user. The syntax for useradd and groupadd may differ slightly on different Unixes. They may also be called adduser and addgroup. You may wish to call the user and group something else instead of mysql.
Change into the intended installation directory:
shell> cd /usr/local
Unpack the distribution and create the installation directory:
shell> gunzip < /path/to/mysql-VERSION-OS.tar.gz | tar xvf -
shell> ln -s mysql-VERSION-OS mysql

The first command creates a directory named `mysql-VERSION-OS'. The second command makes a symbolic link to that directory. This lets you refer more easily to the installation directory as `/usr/local/mysql'.
Change into the installation directory:
shell> cd mysql

You will find several files and subdirectories in the mysql directory. The most important for installation purposes are the `bin' and `scripts' subdirectories.
`bin'
This directory contains client programs and the server. You should add the full pathname of this directory to your PATH environment variable so that your shell finds the MySQL programs properly.
`scripts'
This directory contains the mysql_install_db script used to initialize the server access permissions.
If you would like to use mysqlaccess and have the MySQL distribution in some nonstandard place, you must change the location where mysqlaccess expects to find the mysql client. Edit the `bin/mysqlaccess' script at approximately line 18. Search for a line that looks like this:
$MYSQL = '/usr/local/bin/mysql'; # path to mysql executable

Change the path to reflect the location where mysql actually is stored on your system. If you do not do this, you will get a Broken pipe error when you run mysqlaccess.
Create the MySQL grant tables (necessary only if you haven't installed MySQL before):
shell> scripts/mysql_install_db

Note that MySQL versions older than Version 3.22.10 started the MySQL server when you run mysql_install_db. This is no longer true! Change ownership of the installation directory to the user that you will run mysqld as:
shell> chown -R mysql /usr/local/mysql
shell> chgrp -R mysql /usr/local/mysql

The first command changes the owner attribute of the files to the mysql user, and the second changes the group attribute to the mysql group.
If you would like MySQL to start automatically when you boot your machine, you can copy support-files/mysql.server to the location where your system has its startup files. More information can be found in the support-files/mysql.server script itself.
After everything has been unpacked and installed, you should initialize and test your distribution.

You can start the MySQL server with the following command:

shell> bin/safe_mysqld --user=mysql &

MySQL - Quick Installation Overview
The basic commands you must execute to install a MySQL source distribution are:

shell> groupadd mysql
shell> useradd -g mysql mysql
shell> gunzip < mysql-VERSION.tar.gz | tar xvf -
shell> cd mysql-VERSION
shell> ./configure --prefix=/usr/local/mysql
shell> make
shell> make install
shell> scripts/mysql_install_db
shell> chown -R mysql /usr/local/mysql
shell> chgrp -R mysql /usr/local/mysql
shell> /usr/local/mysql/bin/safe_mysqld --user=mysql &

If you start from a source RPM, then do the following:

shell> rpm --rebuild MySQL-VERSION.src.rpm

This will make a binary RPM that you can install.

You can add new users using the bin/mysql_setpermission script if you install the DBI and Msql-Mysql-modules Perl modules.
A more detailed description follows.

Pick the directory under which you want to unpack the distribution, and move into it.
If you are interested in using Berkeley DB tables with MySQL, you will need to obtain a patched version of the Berkeley DB source code. Please read the chapter on Berkeley DB tables before proceeding.
MySQL source distributions are provided as compressed tar archives and have names like `mysql-VERSION.tar.gz', where VERSION is a number like 3.23.33.
Add a user and group for mysqld to run as:
shell> groupadd mysql
shell> useradd -g mysql mysql

These commands add the mysql group, and the mysql user. The syntax for useradd and groupadd may differ slightly on different Unixes. They may also be called adduser and addgroup. You may wish to call the user and group something else instead of mysql.
Unpack the distribution into the current directory:
shell> gunzip < /path/to/mysql-VERSION.tar.gz | tar xvf -
This command creates a directory named `mysql-VERSION'. Change into the top-level directory of the unpacked distribution:
shell> cd mysql-VERSION

Note that currently you must configure and build MySQL from this top-level directory. You can not build it in a different directory.
Configure the release and compile everything:
shell> ./configure --prefix=/usr/local/mysql
shell> make

When you run configure, you might want to specify some options. Run ./configure --help for a list of options. If configure fails, and you are going to send mail to
mysql@lists.mysql.com to ask for assistance, please include any lines from `config.log' that you think can help solve the problem. Also include the last couple of lines of output from configure if configure aborts. Post the bug report using the mysqlbug script.
Install everything:
shell> make install

You might need to run this command as root.
Create the MySQL grant tables (necessary only if you haven't installed MySQL before):
shell> scripts/mysql_install_db

Note that MySQL versions older than Version 3.22.10 started the MySQL server when you run mysql_install_db. This is no longer true!
Change ownership of the installation to the user that you will run mysqld as:
shell> chown -R mysql /usr/local/mysql
shell> chgrp -R mysql /usr/local/mysql

The first command changes the owner attribute of the files to the mysql user, and the second changes the group attribute to the mysql group.

If you would like MySQL to start automatically when you boot your machine, you can copy support-files/mysql.server to the location where your system has its startup files. More information can be found in the support-files/mysql.server script itself.

After everything has been installed, you should initialize and test your distribution:
shell> /usr/local/mysql/bin/safe_mysqld --user=mysql &

If that command fails immediately with mysqld daemon ended then you can find some information in the file `mysql-data-directory/'hostname'.err'. The likely reason is that you already have another mysqld server running.

MySQL - MySQL Extensions to ANSI SQL92
MySQL includes some extensions that you probably will not find in other SQL databases. Be warned that if you use them, your code will not be portable to other SQL servers. In some cases, you can write code that includes MySQL extensions, but is still portable, by using comments of the form /*! ... */. In this case, MySQL will parse and execute the code within the comment as it would any other MySQL statement, but other SQL servers will ignore the extensions. For example:

SELECT /*! STRAIGHT_JOIN */ col_name FROM table1,table2 WHERE ...

If you add a version number after the '!', the syntax will only be executed if the MySQL version is equal to or newer than the used version number:

CREATE /*!32302 TEMPORARY */ TABLE t (a int);

The above means that if you have Version 3.23.02 or newer, then MySQL will use the TEMPORARY keyword.

MySQL extensions are listed below:

The field types MEDIUMINT, SET, ENUM, and the different BLOB and TEXT types.
The field attributes AUTO_INCREMENT, BINARY, NULL, UNSIGNED, and ZEROFILL.
All string comparisons are case insensitive by default, with sort ordering determined by the current character set (ISO-8859-1 Latin1 by default). If you don't like this, you should declare your columns with the BINARY attribute or use the BINARY cast, which causes comparisons to be done according to the ASCII order used on the MySQL server host.
MySQL maps each database to a directory under the MySQL data directory, and tables within a database to filenames in the database directory. This has a few implications:
Database names and table names are case sensitive in MySQL on operating systems that have case-sensitive filenames (like most Unix systems).
Database, table, index, column, or alias names may begin with a digit (but may not consist solely of digits).
You can use standard system commands to backup, rename, move, delete, and copy tables. For example, to rename a table, rename the `.MYD', `.MYI', and `.frm' files to which the table corresponds.
In SQL statements, you can access tables from different databases with the db_name.tbl_name syntax. Some SQL servers provide the same functionality but call this User space. MySQL doesn't support tablespaces as in: create table ralph.my_table...IN my_tablespace.
LIKE is allowed on numeric columns.
Use of INTO OUTFILE and STRAIGHT_JOIN in a SELECT statement.
The SQL_SMALL_RESULT option in a SELECT statement.
EXPLAIN SELECT to get a description on how tables are joined.
Use of index names, indexes on a prefix of a field, and use of INDEX or KEY in a CREATE TABLE statement.
Use of TEMPORARY or IF NOT EXISTS with CREATE TABLE.
Use of COUNT(DISTINCT list) where 'list' is more than one element.
Use of CHANGE col_name, DROP col_name, or DROP INDEX, IGNORE or RENAME in an ALTER TABLE statement.
Use of RENAME TABLE.
Use of multiple ADD, ALTER, DROP, or CHANGE clauses in an ALTER TABLE statement.
Use of DROP TABLE with the keywords IF EXISTS.
You can drop multiple tables with a single DROP TABLE statement.
The LIMIT clause of the DELETE statement.
The DELAYED clause of the INSERT and REPLACE statements.
The LOW_PRIORITY clause of the INSERT, REPLACE, DELETE, and UPDATE statements.
Use of LOAD DATA INFILE. In many cases, this syntax is compatible with Oracle's LOAD DATA INFILE.
The ANALYZE TABLE, CHECK TABLE, OPTIMIZE TABLE, and REPAIR TABLE statements.
The SHOW statement.
Strings may be enclosed by either `"' or `'', not just by `''.
Use of the escape `\' character.
The SET OPTION statement.
You don't need to name all selected columns in the GROUP BY part. This gives better performance for some very specific, but quite normal queries.
One can specify ASC and DESC with GROUP BY.
To make it easier for users who come from other SQL environments, MySQL supports aliases for many functions. For example, all string functions support both ANSI SQL syntax and ODBC syntax.
MySQL understands the || and && operators to mean logical OR and AND, as in the C programming language. In MySQL, || and OR are synonyms, as are && and AND. Because of this nice syntax, MySQL doesn't support the ANSI SQL || operator for string concatenation; use CONCAT() instead. Because CONCAT() takes any number of arguments, it's easy to convert use of the || operator to MySQL.
CREATE DATABASE or DROP DATABASE.
The % operator is a synonym for MOD(). That is, N % M is equivalent to MOD(N,M). % is supported for C programmers and for compatibility with PostgreSQL.
The =, <>, <=, <, >=, >, <<, >>, <=>, AND, OR, or LIKE operators may be used in column comparisons to the left of the FROM in SELECT statements. For example:
mysql> SELECT col1=1 AND col2=2 FROM tbl_name;


The LAST_INSERT_ID() function.
The REGEXP and NOT REGEXP extended regular expression operators.
CONCAT() or CHAR() with one argument or more than two arguments. (In MySQL, these functions can take any number of arguments.)
The BIT_COUNT(), CASE, ELT(), FROM_DAYS(), FORMAT(), IF(), PASSWORD(), ENCRYPT(), md5(), ENCODE(), DECODE(), PERIOD_ADD(), PERIOD_DIFF(), TO_DAYS(), or WEEKDAY() functions.
Use of TRIM() to trim substrings. ANSI SQL only supports removal of single characters.
The GROUP BY functions STD(), BIT_OR(), and BIT_AND().
Use of REPLACE instead of DELETE + INSERT.
The FLUSH flush_option statement.
The possibility to set variables in a statement with :=:
SELECT @a:=SUM(total),@b:=COUNT(*),@a/@b AS avg FROM test_table;
SELECT @t1:=(@t2:=1)+@t3:=4,@t1,@t2,@t3;

MySQL - Running MySQL in ANSI Mode
If you start mysqld with the --ansi option, the following behavior of MySQL changes:

|| is string concatenation instead of OR.
You can have any number of spaces between a function name and the `('. This forces all function names to be treated as reserved words.
`"' will be an identifier quote character (like the MySQL ``' quote character) and not a string quote character.
REAL will be a synonym for FLOAT instead of a synonym for DOUBLE.
MySQL - Differences Compared to ANSI SQL92
We try to make MySQL follow the ANSI SQL standard and the ODBC SQL standard, but in some cases MySQL does some things differently:

-- is only a comment if followed by a white space.
For VARCHAR columns, trailing spaces are removed when the value is stored.
In some cases, CHAR columns are silently changed to VARCHAR columns.
Privileges for a table are not automatically revoked when you delete a table. You must explicitly issue a REVOKE to revoke privileges for a table.
NULL AND FALSE will evaluate to NULL and not to FALSE. This is because we don't think it's good to have to evaluate a lot of extra conditions in this case.

MySQL - Functionality Missing from MySQL
The following functionality is missing in the current version of MySQL. For a prioritized list indicating when new extensions may be added to MySQL, you should consult the online MySQL TODO list; it is the up-to-date version of the TODO list that appears in this manual.

MySQL - Sub-selects
The following will not yet work in MySQL:

SELECT * FROM table1 WHERE id IN (SELECT id FROM table2);
SELECT * FROM table1 WHERE id NOT IN (SELECT id FROM table2);
SELECT * FROM table1 WHERE NOT EXISTS (SELECT id FROM table2 where table1.id=table2.id);


However, in many cases you can rewrite the query without a sub-select:


SELECT table1.* FROM table1,table2 WHERE table1.id=table2.id;
SELECT table1.* FROM table1 LEFT JOIN table2 ON table1.id=table2.id where table2.id IS NULL

For more complicated subqueries you can often create temporary tables to hold the subquery. In some cases, however, this option will not work. The most frequently encountered of these cases arises with DELETE statements, for which standard SQL does not support joins (except in sub-selects). For this situation there are two options available until subqueries are supported by MySQL.

The first option is to use a procedural programming language (such as Perl or PHP) to submit a SELECT query to obtain the primary keys for the records to be deleted, and then use these values to construct the DELETE statement (DELETE FROM ... WHERE ... IN (key1, key2, ...)).

The second option is to use interactive SQL to construct a set of DELETE statements automatically, using the MySQL extension CONCAT() (in lieu of the standard || operator). For example:

SELECT CONCAT('DELETE FROM tab1 WHERE pkid = ', tab1.pkid, ';')
FROM tab1, tab2
WHERE tab1.col1 = tab2.col2;

You can place this query in a script file and redirect input from it to the mysql command-line interpreter, piping its output back to a second instance of the interpreter:

prompt> mysql --skip-column-names mydb < myscript.sql | mysql mydb

MySQL only supports INSERT ... SELECT ... and REPLACE ... SELECT ... Independent sub-selects will probably be available in Version 4.0. You can now use the function IN() in other contexts, however.

MySQL - SELECT INTO TABLE
MySQL doesn't yet support the Oracle SQL extension: SELECT ... INTO TABLE .... MySQL supports instead the ANSI SQL syntax INSERT INTO ... SELECT ..., which is basically the same thing.

Alternatively, you can use SELECT INTO OUTFILE... or CREATE TABLE ... SELECT to solve your problem.
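For example (table names are hypothetical), either of these achieves the same effect as SELECT ... INTO TABLE:

INSERT INTO archive_orders SELECT * FROM orders WHERE order_date < '2000-01-01';
# or create and populate a new table in one statement:
CREATE TABLE archive_orders SELECT * FROM orders WHERE order_date < '2000-01-01';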

MySQL - Transactions
As MySQL nowadays supports transactions, the following discussion is only valid if you are using only the non-transaction-safe table types.
The question is often asked, by the curious and the critical, ``Why is MySQL not a transactional database?'' or ``Why does MySQL not support transactions?''
MySQL has made a conscious decision to support another paradigm for data integrity, ``atomic operations.'' It is our thinking and experience that atomic operations offer equal or even better integrity with much better performance. We, nonetheless, appreciate and understand the transactional database paradigm and plan, within the next few releases, to introduce transaction-safe tables on a per table basis. We will be giving our users the possibility to decide if they need the speed of atomic operations or if they need to use transactional features in their applications.
How does one use the features of MySQL to maintain rigorous integrity and how do these features compare with the transactional paradigm?
First, in the transactional paradigm, if your applications are written in a way that is dependent on the calling of ``rollback'' instead of ``commit'' in critical situations, then transactions are more convenient. Moreover, transactions ensure that unfinished updates or corrupting activities are not committed to the database; the server is given the opportunity to do an automatic rollback and your database is saved.
MySQL, in almost all cases, allows you to solve potential problems by including simple checks before updates and by running simple scripts that check the databases for inconsistencies and automatically repair or warn if an inconsistency occurs. Note that just by using the MySQL log or even adding one extra log, one can normally fix tables perfectly with no data integrity loss.
Moreover, fatal transactional updates can be rewritten to be atomic. In fact,we will go so far as to say that all integrity problems that transactions solve can be done with LOCK TABLES or atomic updates, ensuring that you never will get an automatic abort from the database, which is a common problem with transactional databases.
Not even transactions can prevent all loss if the server goes down. In such cases even a transactional system can lose data. The difference between systems lies in just how small the window is in which they could lose data. No system is 100% secure, only ``secure enough.'' Even Oracle, reputed to be the safest of transactional databases, is reported to sometimes lose data in such situations.
To be safe with MySQL, you only need to have backups and have the update logging turned on. With this you can recover from any situation that you could with any transactional database. It is, of course, always good to have backups, independent of which database you use.
The transactional paradigm has its benefits and its drawbacks. Many users and application developers depend on the ease with which they can code around problems where an abort appears to be, or is necessary, and they may have to do a little more work with MySQL to either think differently or write more. If you are new to the atomic operations paradigm, or more familiar or more comfortable with transactions, do not jump to the conclusion that MySQL has not addressed these issues. Reliability and integrity are foremost in our minds. Recent estimates indicate that there are more than 1,000,000 mysqld servers currently running, many of which are in production environments. We hear very, very seldom from our users that they have lost any data, and in almost all of those cases user error is involved. This is, in our opinion, the best proof of MySQL's stability and reliability.
Lastly, in situations where integrity is of highest importance, MySQL's current features allow for transaction-level or better reliability and integrity. If you lock tables with LOCK TABLES, all updates will stall until any integrity checks are made. If you only obtain a read lock (as opposed to a write lock), then reads and inserts are still allowed to happen. The new inserted records will not be seen by any of the clients that have a READ lock until they release their read locks. With INSERT DELAYED you can queue inserts into a local queue, until the locks are released, without having the client wait for the insert to complete.
``Atomic,'' in the sense that we mean it, is nothing magical. It only means that you can be sure that while each specific update is running, no other user can interfere with it, and there will never be an automatic rollback (which can happen on transaction based systems if you are not very careful). MySQL also guarantees that there will not be any dirty reads. You can find some example of how to write atomic updates in the commit-rollback section.
We have thought quite a bit about integrity and performance, and we believe that our atomic operations paradigm allows for both high reliability and extremely high performance, on the order of three to five times the speed of the fastest and most optimally tuned of transactional databases. We didn't leave out transactions because they are hard to do. The main reason we went with atomic operations as opposed to transactions is that by doing this we could apply many speed optimizations that would not otherwise have been possible.
Many of our users who have speed foremost in their minds are not at all concerned about transactions. For them transactions are not an issue. For those of our users who are concerned with or have wondered about transactions vis-a-vis MySQL, there is a ``MySQL way'' as we have outlined above. For those where safety is more important than speed, we recommend them to use the BDB tables for all their critical data.
One final note: We are currently working on a safe replication schema that we believe to be better than any commercial replication system we know of. This system will work most reliably under the atomic operations, non-transactional, paradigm. Stay tuned.

MySQL - Stored Procedures and Triggers
A stored procedure is a set of SQL commands that can be compiled and stored in the server. Once this has been done, clients don't need to keep reissuing the entire query but can refer to the stored procedure. This provides better performance because the query has to be parsed only once, and less information needs to be sent between the server and the client. You can also raise the conceptual level by having libraries of functions in the server.
A trigger is a stored procedure that is invoked when a particular event occurs. For example, you can install a stored procedure that is triggered each time a record is deleted from a transaction table and that automatically deletes the corresponding customer from a customer table when all his transactions are deleted.
The planned update language will be able to handle stored procedures, but without triggers. Triggers usually slow down everything, even queries for which they are not needed.

MySQL - Foreign Keys
Note that foreign keys in SQL are not used to join tables, but are used mostly for checking referential integrity (foreign key constraints). If you want to get results from multiple tables from a SELECT statement, you do this by joining tables:

SELECT * from table1,table2 where table1.id = table2.id;

The FOREIGN KEY syntax in MySQL exists only for compatibility with other SQL vendors' CREATE TABLE commands; it doesn't do anything. The FOREIGN KEY syntax without ON DELETE ... is mostly used for documentation purposes. Some ODBC applications may use this to produce automatic WHERE clauses, but this is usually easy to override. FOREIGN KEY is sometimes used as a constraint check, but this check is unnecessary in practice if rows are inserted into the tables in the right order. MySQL only supports these clauses because some applications require them to exist (regardless of whether or not they work).

In MySQL, you can work around the problem of ON DELETE ... not being implemented by adding the appropriate DELETE statement to an application when you delete records from a table that has a foreign key. In practice this is as quick (in some cases quicker) and much more portable than using foreign keys.
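Reusing the hypothetical universities/students example from earlier, that application-level workaround is simply two DELETEs in the right order:

DELETE FROM students WHERE university_id = 42;      # remove dependent rows first
DELETE FROM universities WHERE university_id = 42;  # then the referenced row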

In the near future we will extend the FOREIGN KEY implementation so that at least the information will be saved in the table specification file and may be retrieved by mysqldump and ODBC. At a later stage we will implement the foreign key constraints for application that can't easily be coded to avoid them.

MySQL - Reasons NOT to Use Foreign Keys constraints
There are so many problems with foreign key constraints that we don't know where to start:
Foreign key constraints make life very complicated, because the foreign key definitions must be stored in a database and implementing them would destroy the whole ``nice approach'' of using files that can be moved, copied, and removed. The speed impact is terrible for INSERT and UPDATE statements, and in this case almost all FOREIGN KEY constraint checks are useless because you usually insert records in the right tables in the right order, anyway. There is also a need to hold locks on many more tables when updating one table, because the side effects can cascade through the entire database. It's MUCH faster to delete records from one table first and subsequently delete them from the other tables.
You can no longer restore a table by doing a full delete from the table and then restoring all records (from a new source or from a backup).
If you use foreign key constraints you can't dump and restore tables unless you do so in a very specific order. It's very easy to make ``allowed'' circular definitions that make it impossible to re-create each table with a single CREATE statement, even if the definition works and is usable.
It's very easy to overlook FOREIGN KEY ... ON DELETE rules when one codes an application. It's not unusual that one loses a lot of important information just because of a wrong or misused ON DELETE rule.
The only nice aspect of FOREIGN KEY is that it gives ODBC and some other client programs the ability to see how a table is connected and to use this to show connection diagrams and to help in building applications.

MySQL will soon store FOREIGN KEY definitions so that a client can ask for and receive an answer about how the original connection was made. The current `.frm' file format does not have any place for it. At a later stage we will implement the foreign key constraints for application that can't easily be coded to avoid them.

MySQL - Views
MySQL doesn't support views, but this is on the TODO.

MySQL - `--' as the Start of a Comment
Some other SQL databases use `--' to start comments. MySQL has `#' as the start comment character, even if the mysql command-line tool removes all lines that start with `--'. You can also use the C comment style /* this is a comment */ with MySQL.

MySQL Version 3.23.3 and above supports the `--' comment style only if the comment is followed by a space. This is because this degenerate comment style has caused many problems with automatically generated SQL queries that have used something like the following code, where we automatically insert the value of the payment for !payment!:

UPDATE tbl_name SET credit=credit-!payment!

What do you think will happen when the value of payment is negative?

Because 1--1 is legal in SQL, we think it is terrible that `--' means start comment.

In MySQL Version 3.23 you can, however, use: 1-- This is a comment

The following discussion only concerns you if you are running a MySQL version earlier than Version 3.23:

If you have a SQL program in a text file that contains `--' comments you should use:

shell> replace " --" " #" < text-file-with-funny-comments.sql | mysql database

You can also edit the file ``in place'' to change the `--' comments to `#' comments:

shell> replace " --" " #" -- text-file-with-funny-comments.sql

Change them back with this command:

shell> replace " #" " --" -- text-file-with-funny-comments.sql

MySQL - How to Cope Without COMMIT/ROLLBACK
The following mostly applies only to ISAM, MyISAM, and HEAP tables. If you use only transaction-safe tables (BDB tables) in an update, you can do COMMIT and ROLLBACK also with MySQL.
The problem with handling COMMIT-ROLLBACK efficiently with the above table types would require a completely different table layout than MySQL uses today. The table type would also need extra threads that do automatic cleanups on the tables, and the disk usage would be much higher. This would make these table types about 2-4 times slower than they are today.
For the moment, we prefer implementing the SQL server language (something like stored procedures). With this you would very seldom really need COMMIT-ROLLBACK. This would also give much better performance.
Loops that need transactions normally can be coded with the help of LOCK TABLES, and you don't need cursors when you can update records on the fly.
We at TcX had a greater need for a real fast database than a 100% general database. Whenever we find a way to implement these features without any speed loss, we will probably do it. For the moment, there are many more important things to do. Check the TODO for how we prioritize things at the moment. (Customers with higher levels of support can alter this, so things may be reprioritized.)
The current problem is actually ROLLBACK. Without ROLLBACK, you can do any kind of COMMIT action with LOCK TABLES. To support ROLLBACK with the above table types, MySQL would have to be changed to store all old records that were updated and revert everything back to the starting point if ROLLBACK was issued. For simple cases, this isn't that hard to do (the current isamlog could be used for this purpose), but it would be much more difficult to implement ROLLBACK for ALTER/DROP/CREATE TABLE.
To avoid using ROLLBACK, you can use the following strategy:
Use LOCK TABLES ... to lock all the tables you want to access.
Test conditions.
Update if everything is okay.
Use UNLOCK TABLES to release your locks.
This is usually a much faster method than using transactions with possible ROLLBACKs, although not always. The only situation this solution doesn't handle is when someone kills the threads in the middle of an update. In this case, all locks will be released but some of the updates may not have been executed.
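A minimal sketch of that LOCK TABLES strategy, using the customer table from the example further below plus a hypothetical orders table and customer id:

mysql> LOCK TABLES customer WRITE, orders WRITE;
mysql> SELECT money_he_owes_us FROM customer WHERE customer_id=123;
       # test the returned value in the application; if everything is okay:
mysql> UPDATE customer SET money_he_owes_us=money_he_owes_us+100
       WHERE customer_id=123;
mysql> INSERT INTO orders (customer_id,amount) VALUES (123,100);
mysql> UNLOCK TABLES;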

You can also use functions to update records in a single operation. You can get a very efficient application by using the following techniques:

Modify fields relative to their current value.
Update only those fields that actually have changed.
For example, when we are doing updates to some customer information, we update only the customer data that has changed and test only that none of the changed data, or data that depend on the changed data, has changed compared to the original row. The test for changed data is done with the WHERE clause in the UPDATE statement. If the record wasn't updated, we give the client a message: "Some of the data you have changed have been changed by another user". Then we show the old row versus the new row in a window, so the user can decide which version of the customer record he should use.

This gives us something that is similar to column locking but is actually even better, because we only update some of the columns, using values that are relative to their current values. This means that typical UPDATE statements look something like these:

UPDATE tablename SET pay_back=pay_back+'relative change';

UPDATE customer
SET
customer_date='current_date',
address='new address',
phone='new phone',
money_he_owes_us=money_he_owes_us+'new_money'
WHERE
customer_id=id AND address='old address' AND phone='old phone';

As you can see, this is very efficient and works even if another client has changed the values in the pay_back or money_he_owes_us columns.

In many cases, users have wanted ROLLBACK and/or LOCK TABLES for the purpose of managing unique identifiers for some tables. This can be handled much more efficiently by using an AUTO_INCREMENT column and either the SQL function LAST_INSERT_ID() or the C API function mysql_insert_id().
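A brief sketch of that approach, with a hypothetical articles table:

mysql> CREATE TABLE articles (
       id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
       title VARCHAR(100));
mysql> INSERT INTO articles (title) VALUES ('First article');
mysql> SELECT LAST_INSERT_ID();    # returns the id just assigned, here 1

LAST_INSERT_ID() is maintained per connection, so concurrent inserts by other clients do not disturb the value you read.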

At MySQL AB, we have never had any need for row-level locking because we have always been able to code around it. Some cases really need row locking, but they are very few. If you want row-level locking, you can use a flag column in the table and do something like this:

UPDATE tbl_name SET row_flag=1 WHERE id=ID;

MySQL returns 1 for the number of affected rows if the row was found and row_flag wasn't already 1 in the original row.
You can think of it as if MySQL changed the above query to:

UPDATE tbl_name SET row_flag=1 WHERE id=ID and row_flag <> 1;
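A short sketch of how an application might use this flag (tbl_name, row_flag, and ID are the same placeholders as above):

mysql> UPDATE tbl_name SET row_flag=1 WHERE id=ID;
       # if the affected-rows count is 1, this client now "owns" the row;
       # do the work, then release the flag:
mysql> UPDATE tbl_name SET row_flag=0 WHERE id=ID;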

MySQL - General Security
Anyone using MySQL on a computer connected to the Internet should read this section to avoid the most common security mistakes.

In discussing security, we emphasize the necessity of fully protecting the entire server host (not simply the MySQL server) against all types of applicable attacks: eavesdropping, altering, playback, and denial of service. We do not cover all aspects of availability and fault tolerance here.

MySQL uses Access Control Lists (ACLs) security for all connections, queries, and other operations that a user may attempt to perform. There is also some support for SSL-encrypted connections between MySQL clients and servers. Many of the concepts discussed here are not specific to MySQL at all; the same general ideas apply to almost all applications.

When running MySQL, follow these guidelines whenever possible:

DON'T EVER GIVE ANYONE (EXCEPT THE MySQL ROOT USER) ACCESS TO THE mysql.user TABLE! The encrypted password is the real password in MySQL. If you know the encrypted password for a user, you can easily log in as that user if you can connect from that user's 'host'.
Learn the MySQL access privilege system. The GRANT and REVOKE commands are used for restricting access to MySQL. Do not grant any more privileges than necessary. Never grant privileges to all hosts. Checklist:
Try mysql -u root. If you are able to connect successfully to the server without being asked for a password, you have problems. Any user (not just root) can connect to your MySQL server with full privileges! Review the MySQL installation instructions, paying particular attention to the item about setting a root password.
Use the command SHOW GRANTS and check to see who has access to what. Remove those privileges that are not necessary using the REVOKE command.
Do not keep any plain-text passwords in your database. When your computer becomes compromised, the intruder can take the full list of passwords and use them. Instead use MD5() or another one-way hashing function.
Do not use passwords from dictionaries. There are special programs to break them. Even passwords like ``xfish98'' are very bad. Much better is ``duag98'', which contains the same word ``fish'' but typed one key to the left on a standard QWERTY keyboard. Another method is to use ``Mhall'', which is taken from the first characters of each word in the sentence ``Mary had a little lamb.'' This is easy to remember and type, but hard to guess for someone who does not know it.
Invest in a firewall. This protects from at least 50% of all types of exploits in any software. Put MySQL behind the firewall or in a demilitarized zone (DMZ). Checklist:
Try to scan your ports from the Internet using a tool such as nmap. MySQL uses port 3306 by default. This port should be inaccessible from untrusted hosts. Another simple way to check whether or not your MySQL port is open is to type telnet server_host 3306 from some remote machine, where server_host is the hostname of your MySQL server. If you get a connection and some garbage characters, the port is open, and should be closed on your firewall or router, unless you really have a good reason to keep it open. If telnet just hangs, everything is OK, the port is blocked.
Do not trust any data entered by your users. They can try to trick your code by entering special or escaped character sequences in Web forms, URLs, or whatever application you have built. Be sure that your application remains secure if a user enters something like ``; DROP DATABASE mysql;''. This is an extreme example, but large security leaks and data loss may occur as a result of hackers using similar techniques, if you do not prepare for them. Also remember to check numeric data. A common mistake is to protect only strings. Sometimes people think that if a database contains only publicly available data that it need not be protected. This is incorrect. At least denial-of-service type attacks can be performed on such databases. The simplest way to protect from this type of attack is to use apostrophes around the numeric constants: SELECT * FROM table WHERE ID='234' instead of SELECT * FROM table WHERE ID=234. MySQL automatically converts this string to a number and strips all non-numeric symbols from it. Checklist:
All WWW applications:
Try to enter `'' and `"' in all your Web forms. If you get any kind of MySQL error, investigate the problem right away.
Try to modify any dynamic URLs by adding %22 (`"'), %23 (`#'), and %27 (`'') in the URL.
Try to modify datatypes in dynamic URLs from numeric ones to character ones containing characters from previous examples.
Your application should be safe against this and similar attacks.
Try to enter characters, spaces, and special symbols instead of numbers in numeric fields. Your application should remove them before passing them to MySQL or your application should generate an error. Passing unchecked values to MySQL is very dangerous!
Check data sizes before passing them to MySQL.
Consider having your application connect to the database using a different user name than the one you use for administrative purposes. Do not give your applications any more access privileges than they need.
Users of PHP:
Check out the addslashes() function.
Users of MySQL C API:
Check out the mysql_escape_string() API call.
Users of MySQL++:
Check out the escape and quote modifiers for query streams.
Users of Perl DBI:
Check out the quote() method.
Do not transmit plain (unencrypted) data over the Internet. These data are accessible to everyone who has the time and ability to intercept it and use it for their own purposes. Instead, use an encrypted protocol such as SSL or SSH. MySQL supports internal SSL connections as of Version 3.23.9. SSH port-forwarding can be used to create an encrypted (and compressed) tunnel for the communication.
Learn to use the tcpdump and strings utilities. For most cases, you can check whether or not MySQL data streams are unencrypted by issuing a command like the following:
shell> tcpdump -l -i eth0 -w - src or dst port 3306 | strings

(This works under Linux and should work with small modifications under other systems.) Warning: if you do not see plaintext data, this does not necessarily mean that the traffic actually is encrypted. If you need high security, you should consult with a security expert.

How to Make MySQL Secure Against Crackers ?
When you connect to a MySQL server, you normally should use a password. The password is not transmitted in clear text over the connection, however the encryption algorithm is not very strong, and with some effort a clever attacker can crack the password if he is able to sniff the traffic between the client and the server. If the connection between the client and the server goes through an untrusted network, you should use an SSH tunnel to encrypt the communication.
All other information is transferred as text that can be read by anyone who is able to watch the connection. If you are concerned about this, you can use the compressed protocol (in MySQL Version 3.22 and above) to make things much harder. To make things even more secure you should use ssh (see http://www.cs.hut.fi/ssh). With this, you can get an encrypted TCP/IP connection between a MySQL server and a MySQL client.
To make a MySQL system secure, you should strongly consider the following suggestions:
Use passwords for all MySQL users. Remember that anyone can log in as any other person as simply as mysql -u other_user db_name if other_user has no password. It is common behavior with client/server applications that the client may specify any user name. You can change the password of all users by editing the mysql_install_db script before you run it, or only the password for the MySQL root user like this:
shell> mysql -u root mysql
mysql> UPDATE user SET Password=PASSWORD('new_password')
WHERE user='root';
mysql> FLUSH PRIVILEGES;

Don't run the MySQL daemon as the Unix root user. This is very dangerous, because any user with the FILE privilege will be able to create files as root (for example, ~root/.bashrc). To prevent this, mysqld refuses to run as root unless that is specified explicitly with the --user=root option. mysqld can be run as any other user instead. You can also create a new Unix user mysql to make everything even more secure. If you run mysqld as another Unix user, you don't need to change the root user name in the user table, because MySQL user names have nothing to do with Unix user names. You can edit the mysql.server script to start mysqld as another Unix user. Normally this is done with the su command.
If you put a password for the Unix root user in the mysql.server script, make sure this script is readable only by root. Check that the Unix user that mysqld runs as is the only user with read/write privileges in the database directories. On Unix platforms, do not run mysqld as root unless you really need to. Consider creating a user named mysql for that purpose.
Don't give the process privilege to all users. The output of mysqladmin processlist shows the text of the currently executing queries, so any user who is allowed to execute that command might be able to see if another user issues an UPDATE user SET password=PASSWORD('not_secure') query. mysqld reserves an extra connection for users who have the process privilege, so that a MySQL root user can log in and check things even if all normal connections are in use. Don't give the file privilege to all users. Any user that has this privilege can write a file anywhere in the file system with the privileges of the mysqld daemon! To make this a bit safer, all files generated with SELECT ... INTO OUTFILE are readable to everyone, and you can't overwrite existing files. The file privilege may also be used to read any file accessible to the Unix user that the server runs as. This could be abused, for example, by using LOAD DATA to load `/etc/passwd' into a table, which can then be read with SELECT.
If you don't trust your DNS, you should use IP numbers instead of hostnames in the grant tables. In principle, the --secure option to mysqld should make hostnames safe. In any case, you should be very careful about creating grant table entries using hostname values that contain wild cards!
If you want to restrict the number of connections for a single user, you can do this by setting the max_user_connections variable in mysqld.
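For example, a rough sketch using the generic --set-variable startup syntax of this era (the exact way you start the server may differ on your system):

shell> safe_mysqld --set-variable=max_user_connections=10 &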

MySQL - Startup options to mysqld which concern security ?
The following mysqld options affect networking security:
--secure
IP numbers returned by the gethostbyname() system call are checked to make sure they resolve back to the original hostname. This makes it harder for someone on the outside to get access by pretending to be another host. This option also adds some sanity checks of hostnames. The option is turned off by default in MySQL Version 3.21 because sometimes it takes a long time to perform backward resolutions. MySQL Version 3.22 caches hostnames and has this option enabled by default.
--skip-grant-tables
This option causes the server not to use the privilege system at all. This gives everyone full access to all databases! (You can tell a running server to start using the grant tables again by executing mysqladmin flush-privileges or mysqladmin reload.)
--skip-name-resolve
Hostnames are not resolved. All Host column values in the grant tables must be IP numbers or localhost.
--skip-networking
Don't allow TCP/IP connections over the network. All connections to mysqld must be made via Unix sockets. This option is unsuitable for systems that use MIT-pthreads, because the MIT-pthreads package doesn't support Unix sockets.
--skip-show-database
The SHOW DATABASES command doesn't return anything.
--safe-show-database
SHOW DATABASES only returns databases for which the user has some kind of privilege.

MySQL - What the Privilege System Does
The primary function of the MySQL privilege system is to authenticate a user connecting from a given host, and to associate that user with privileges on a database such as select, insert, update and delete.
Additional functionality includes the ability to have an anonymous user and to grant privileges for MySQL-specific functions such as LOAD DATA INFILE and administrative operations.

MySQL User Names and Passwords
There are several distinctions between the way user names and passwords are used by MySQL and the way they are used by Unix or Windows:
User names, as used by MySQL for authentication purposes, have nothing to do with Unix user names (login names) or Windows user names. Most MySQL clients by default try to log in using the current Unix user name as the MySQL user name, but that is for convenience only. Client programs allow a different name to be specified with the -u or --user options. This means that you can't make a database secure in any way unless all MySQL user names have passwords. Anyone may attempt to connect to the server using any name, and they will succeed if they specify any name that doesn't have a password.
MySQL user names can be up to 16 characters long; Unix user names typically are limited to 8 characters.
MySQL passwords have nothing to do with Unix passwords. There is no necessary connection between the password you use to log in to a Unix machine and the password you use to access a database on that machine. MySQL encrypts passwords using a different algorithm than the one used during the Unix login process.
Note that even though the password is stored 'scrambled', knowing your 'scrambled' password is enough to be able to connect to the MySQL server!

Connecting to the MySQL Server
MySQL client programs generally require that you specify connection parameters when you want to access a MySQL server: the host you want to connect to, your user name, and your password. For example, the mysql client can be started like this (optional arguments are enclosed between `[' and `]'):

shell> mysql [-h host_name] [-u user_name] [-pyour_pass]

Alternate forms of the -h, -u, and -p options are --host=host_name, --user=user_name, and --password=your_pass. Note that there is no space between -p or --password= and the password following it.

NOTE: Specifying a password on the command line is not secure! Any user on your system may then find out your password by typing a command like: ps auxww.

mysql uses default values for connection parameters that are missing from the command line:

The default hostname is localhost.
The default user name is your Unix login name.
No password is supplied if -p is missing.
Thus, for a Unix user joe, the following commands are equivalent:

shell> mysql -h localhost -u joe
shell> mysql -h localhost
shell> mysql -u joe
shell> mysql

Other MySQL clients behave similarly.

On Unix systems, you can specify different default values to be used when you make a connection, so that you need not enter them on the command line each time you invoke a client program. This can be done in a couple of ways:

You can specify connection parameters in the [client] section of the `.my.cnf' configuration file in your home directory. The relevant section of the file might look like this:
[client]
host=host_name
user=user_name
password=your_pass

You can specify connection parameters using environment variables. The host can be specified for mysql using MYSQL_HOST. The MySQL user name can be specified using USER (this is for Windows only). The password can be specified using MYSQL_PWD (but this is insecure; see the next section).

MySQL - Keeping Your Password Secure
It is inadvisable to specify your password in a way that exposes it to discovery by other users. The methods you can use to specify your password when you run client programs are listed below, along with an assessment of the risks of each method:
Never give a normal user access to the mysql.user table. Knowing the encrypted password for a user makes it possible to log in as that user. The passwords are only scrambled so that one shouldn't be able to see the real password you used (in case you happen to use a similar password with your other applications).
Use a -pyour_pass or --password=your_pass option on the command line. This is convenient but insecure, because your password becomes visible to system status programs (such as ps) that may be invoked by other users to display command lines. (MySQL clients typically overwrite the command-line argument with zeroes during their initialization sequence, but there is still a brief interval during which the value is visible.)
Use a -p or --password option (with no your_pass value specified). In this case, the client program solicits the password from the terminal:
shell> mysql -u user_name -p
Enter password: ********

The `*' characters represent your password. It is more secure to enter your password this way than to specify it on the command line because it is not visible to other users. However, this method of entering a password is suitable only for programs that you run interactively. If you want to invoke a client from a script that runs non-interactively, there is no opportunity to enter the password from the terminal. On some systems, you may even find that the first line of your script is read and interpreted (incorrectly) as your password!
Store your password in a configuration file. For example, you can list your password in the [client] section of the `.my.cnf' file in your home directory:
[client]
password=your_pass

If you store your password in `.my.cnf', the file should not be group or world readable or writable. Make sure the file's access mode is 400 or 600.
You can store your password in the MYSQL_PWD environment variable, but this method must be considered extremely insecure and should not be used. Some versions of ps include an option to display the environment of running processes; your password will be in plain sight for all to see if you set MYSQL_PWD. Even on systems without such a version of ps, it is unwise to assume there is no other method to observe process environments.
All in all, the safest methods are to have the client program prompt for the password or to specify the password in a properly protected `.my.cnf' file.
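For example, on Unix you can make the `.my.cnf' file private to yourself like this (assuming it lives in your home directory):

shell> chmod 600 ~/.my.cnf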



Privileges Provided by MySQL
Information about user privileges is stored in the user, db, host, tables_priv, and columns_priv tables in the mysql database (that is, in the database named mysql).

The names used in this manual to refer to the privileges provided by MySQL are shown below, along with the table column name associated with each privilege in the grant tables and the context in which the privilege applies:

Privilege    Column           Context
select       Select_priv      tables
insert       Insert_priv      tables
update       Update_priv      tables
delete       Delete_priv      tables
index        Index_priv       tables
alter        Alter_priv       tables
create       Create_priv      databases, tables, or indexes
drop         Drop_priv        databases or tables
grant        Grant_priv       databases or tables
references   References_priv  databases or tables
reload       Reload_priv      server administration
shutdown     Shutdown_priv    server administration
process      Process_priv     server administration
file         File_priv        file access on server

The select, insert, update, and delete privileges allow you to perform operations on rows in existing tables in a database.
SELECT statements require the select privilege only if they actually retrieve rows from a table. You can execute certain SELECT statements even without permission to access any of the databases on the server. For example, you could use the mysql client as a simple calculator:

mysql> SELECT 1+1;
mysql> SELECT PI()*2;

The index privilege allows you to create or drop (remove) indexes.

The alter privilege allows you to use ALTER TABLE.

The create and drop privileges allow you to create new databases and tables, or to drop (remove) existing databases and tables.

Note that if you grant the drop privilege for the mysql database to a user, that user can drop the database in which the MySQL access privileges are stored!

The grant privilege allows you to give to other users those privileges you yourself possess.

The file privilege gives you permission to read and write files on the server using the LOAD DATA INFILE and SELECT ... INTO OUTFILE statements. Any user to whom this privilege is granted can read or write any file that the MySQL server can read or write.

The remaining privileges are used for administrative operations, which are performed using the mysqladmin program. The table below shows which mysqladmin commands each administrative privilege allows you to execute:

Privilege   Commands permitted to privilege holders
reload      reload, refresh, flush-privileges, flush-hosts, flush-logs, and flush-tables
shutdown    shutdown
process     processlist, kill

The reload command tells the server to re-read the grant tables. The refresh command flushes all tables and opens and closes the log files. flush-privileges is a synonym for reload. The other flush-* commands perform functions similar to refresh but are more limited in scope, and may be preferable in some instances. For example, if you want to flush just the log files, flush-logs is a better choice than refresh.

The shutdown command shuts down the server.

The processlist command displays information about the threads executing within the server. The kill command kills server threads. You can always display or kill your own threads, but you need the process privilege to display or kill threads initiated by other users.
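For example (the thread ids reported by processlist are the ones you pass to kill):

shell> mysqladmin processlist
shell> mysqladmin kill 27,45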

It is a good idea in general to grant privileges only to those users who need them, but you should exercise particular caution in granting certain privileges:

The grant privilege allows users to give away their privileges to other users. Two users with different privileges and with the grant privilege are able to combine privileges.
The alter privilege may be used to subvert the privilege system by renaming tables.
The file privilege can be abused to read any world-readable file on the server into a database table, the contents of which can then be accessed using SELECT. This includes the contents of all databases hosted by the server!
The shutdown privilege can be abused to deny service to other users entirely, by terminating the server.
The process privilege can be used to view the plain text of currently executing queries, including queries that set or change passwords.
Privileges on the mysql database can be used to change passwords and other access privilege information. (Passwords are stored encrypted, so a malicious user cannot simply read them to learn the plaintext password.) However, a user who can read the mysql.user Password column can use the encrypted value to log into the MySQL server as the given user. (With sufficient privileges, the same user can also replace a password with a different one.)
There are some things that you cannot do with the MySQL privilege system:

You cannot explicitly specify that a given user should be denied access. That is, you cannot explicitly match a user and then refuse the connection.
You cannot specify that a user has privileges to create or drop tables in a database but not to create or drop the database itself.

MySQL - How the Privilege System Works
The MySQL privilege system ensures that all users may do exactly the things that they are supposed to be allowed to do. When you connect to a MySQL server, your identity is determined by the host from which you connect and the user name you specify. The system grants privileges according to your identity and what you want to do.

MySQL considers both your hostname and user name in identifying you because there is little reason to assume that a given user name belongs to the same person everywhere on the Internet. For example, the user bill who connects from whitehouse.gov need not be the same person as the user bill who connects from microsoft.com. MySQL handles this by allowing you to distinguish users on different hosts that happen to have the same name: you can grant bill one set of privileges for connections from whitehouse.gov, and a different set of privileges for connections from microsoft.com.

MySQL access control involves two stages:

Stage 1: The server checks whether or not you are even allowed to connect.
Stage 2: Assuming you can connect, the server checks each request you issue to see whether or not you have sufficient privileges to perform it. For example, if you try to select rows from a table in a database or drop a table from the database, the server makes sure you have the select privilege for the table or the drop privilege for the database. The server uses the user, db, and host tables in the mysql database at both stages of access control. The fields in these grant tables are shown below:

Table name        user             db           host
Scope fields      Host             Host         Host
                  User             Db           Db
                  Password         User
Privilege fields  Select_priv      Select_priv  Select_priv
                  Insert_priv      Insert_priv  Insert_priv
                  Update_priv      Update_priv  Update_priv
                  Delete_priv      Delete_priv  Delete_priv
                  Index_priv       Index_priv   Index_priv
                  Alter_priv       Alter_priv   Alter_priv
                  Create_priv      Create_priv  Create_priv
                  Drop_priv        Drop_priv    Drop_priv
                  Grant_priv       Grant_priv   Grant_priv
                  References_priv
                  Reload_priv
                  Shutdown_priv
                  Process_priv
                  File_priv

For the second stage of access control (request verification), the server may, if the request involves tables, additionally consult the tables_priv and columns_priv tables. The fields in these tables are shown below:

Table name        tables_priv   columns_priv
Scope fields      Host          Host
                  Db            Db
                  User          User
                  Table_name    Table_name
                                Column_name
Privilege fields  Table_priv    Column_priv
                  Column_priv
Other fields      Timestamp     Timestamp
                  Grantor

Each grant table contains scope fields and privilege fields.

Scope fields determine the scope of each entry in the tables, that is, the context in which the entry applies. For example, a user table entry with Host and User values of 'thomas.loc.gov' and 'bob' would be used for authenticating connections made to the server by bob from the host thomas.loc.gov. Similarly, a db table entry with Host, User, and Db fields of 'thomas.loc.gov', 'bob' and 'reports' would be used when bob connects from the host thomas.loc.gov to access the reports database. The tables_priv and columns_priv tables contain scope fields indicating tables or table/column combinations to which each entry applies.

For access-checking purposes, comparisons of Host values are case insensitive. User, Password, Db, and Table_name values are case sensitive. Column_name values are case insensitive in MySQL Version 3.22.12 or later.

Privilege fields indicate the privileges granted by a table entry, that is, what operations can be performed. The server combines the information in the various grant tables to form a complete description of a user's privileges.

Scope fields are strings, declared as shown below; the default value for each is the empty string:

Field name    Type
Host          CHAR(60)
User          CHAR(16)
Password      CHAR(16)
Db            CHAR(64) (CHAR(60) for the tables_priv and columns_priv tables)
Table_name    CHAR(60)
Column_name   CHAR(60)

In the user, db and host tables, all privilege fields are declared as ENUM('N','Y') -- each can have a value of 'N' or 'Y', and the default value is 'N'.

In the tables_priv and columns_priv tables, the privilege fields are declared as SET fields:

Table name     Field name    Possible set elements
tables_priv    Table_priv    'Select', 'Insert', 'Update', 'Delete', 'Create', 'Drop', 'Grant', 'References', 'Index', 'Alter'
tables_priv    Column_priv   'Select', 'Insert', 'Update', 'References'
columns_priv   Column_priv   'Select', 'Insert', 'Update', 'References'

Briefly, the server uses the grant tables like this:

The user table scope fields determine whether to allow or reject incoming connections. For allowed connections, any privileges granted in the user table indicate the user's global (superuser) privileges. These privileges apply to all databases on the server.
The db and host tables are used together:
The db table scope fields determine which users can access which databases from which hosts. The privilege fields determine which operations are allowed.
The host table is used as an extension of the db table when you want a given db table entry to apply to several hosts. For example, if you want a user to be able to use a database from several hosts in your network, leave the Host value empty in the user's db table entry, then populate the host table with an entry for each of those hosts.

The tables_priv and columns_priv tables are similar to the db table, but are more fine-grained: they apply at the table and column levels rather than at the database level.
Note that administrative privileges (reload, shutdown, etc.) are specified only in the user table. This is because administrative operations are operations on the server itself and are not database-specific, so there is no reason to list such privileges in the other grant tables. In fact, only the user table need be consulted to determine whether or not you can perform an administrative operation.

The file privilege is specified only in the user table, too. It is not an administrative privilege as such, but your ability to read or write files on the server host is independent of the database you are accessing.

When you modify the contents of the grant tables, it is a good idea to make sure that your changes set up privileges the way you want.

A useful diagnostic tool is the mysqlaccess script, which Yves Carlier has provided for the MySQL distribution. Invoke mysqlaccess with the --help option to find out how it works. Note that mysqlaccess checks access using only the user, db and host tables. It does not check table- or column-level privileges.

MySQL - Access Control, Stage 1: Connection Verification
When you attempt to connect to a MySQL server, the server accepts or rejects the connection based on your identity and whether or not you can verify your identity by supplying the correct password. If not, the server denies access to you completely. Otherwise, the server accepts the connection, then enters Stage 2 and waits for requests.

Your identity is based on two pieces of information:

The host from which you connect
Your MySQL user name
Identity checking is performed using the three user table scope fields (Host, User, and Password). The server accepts the connection only if a user table entry matches your hostname and user name, and you supply the correct password.

Values in the user table scope fields may be specified as follows:

A Host value may be a hostname or an IP number, or 'localhost' to indicate the local host.
You can use the wild-card characters `%' and `_' in the Host field.
A Host value of '%' matches any hostname. A blank Host value is equivalent to '%'. Note that these values match any host that can create a connection to your server!
As of MySQL Version 3.23, for Host values specified as IP numbers, you can specify a netmask indicating how many address bits to use for the network number. For example:
GRANT ALL PRIVILEGES on db.* to david@'192.58.197.0/255.255.255.0';

This will allow everyone to connect from an IP where the following is true:
user_ip & netmask = host_ip.

In the above example, all IPs in the interval 192.58.197.0 - 192.58.197.255 can connect to the MySQL server.
Wild-card characters are not allowed in the User field, but you can specify a blank value, which matches any name. If the user table entry that matches an incoming connection has a blank user name, the user is considered to be the anonymous user (the user with no name), rather than the name that the client actually specified. This means that a blank user name is used for all further access checking for the duration of the connection (that is, during Stage 2).
The Password field can be blank. This does not mean that any password matches, it means the user must connect without specifying a password.
Non-blank Password values represent encrypted passwords. MySQL does not store passwords in plaintext form for anyone to see. Rather, the password supplied by a user who is attempting to connect is encrypted (using the PASSWORD() function). The encrypted password is then used when the client/server is checking if the password is correct (This is done without the encrypted password ever traveling over the connection.) Note that from MySQL's point of view the encrypted password is the REAL password, so you should not give anyone access to it! In particular, don't give normal users read access to the tables in the mysql database!
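You can see what the scrambled form looks like with the PASSWORD() function; for example (the exact value returned depends on the server version):

mysql> SELECT PASSWORD('biscuit');
       # returns a scrambled string such as '2bb225f0ba9a9e23', never the plaintext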

The examples below show how various combinations of Host and User values in user table entries apply to incoming connections:

Host value User value Connections matched by entry
'thomas.loc.gov' 'fred' fred, connecting from thomas.loc.gov
'thomas.loc.gov' '' Any user, connecting from thomas.loc.gov
'%' 'fred' fred, connecting from any host
'%' '' Any user, connecting from any host
'%.loc.gov' 'fred' fred, connecting from any host in the loc.gov domain
'x.y.%' 'fred' fred, connecting from x.y.net, x.y.com, x.y.edu, etc. (this is probably not useful)
'144.155.166.177' 'fred' fred, connecting from the host with IP address 144.155.166.177
'144.155.166.%' 'fred' fred, connecting from any host in the 144.155.166 class C subnet
'144.155.166.0/255.255.255.0' 'fred' Same as previous example

Because you can use IP wild-card values in the Host field (for example, '144.155.166.%' to match every host on a subnet), there is the possibility that someone might try to exploit this capability by naming a host 144.155.166.somewhere.com. To foil such attempts, MySQL disallows matching on hostnames that start with digits and a dot. Thus, if you have a host named something like 1.2.foo.com, its name will never match the Host column of the grant tables. Only an IP number can match an IP wild-card value.

An incoming connection may be matched by more than one entry in the user table. For example, a connection from thomas.loc.gov by fred would be matched by several of the entries just shown above. How does the server choose which entry to use if more than one matches? The server resolves this question by sorting the user table after reading it at startup time, then looking through the entries in sorted order when a user attempts to connect. The first matching entry is the one that is used.

user table sorting works as follows. Suppose the user table looks like this:

+-----------+---------+-
| Host      | User    | ...
+-----------+---------+-
| %         | root    | ...
| %         | jeffrey | ...
| localhost | root    | ...
| localhost |         | ...
+-----------+---------+-

When the server reads in the table, it orders the entries with the most-specific Host values first ('%' in the Host column means ``any host'' and is least specific). Entries with the same Host value are ordered with the most-specific User values first (a blank User value means ``any user'' and is least specific). The resulting sorted user table looks like this:

+-----------+---------+-
| Host      | User    | ...
+-----------+---------+-
| localhost | root    | ...
| localhost |         | ...
| %         | jeffrey | ...
| %         | root    | ...
+-----------+---------+-

When a connection is attempted, the server looks through the sorted entries and uses the first match found. For a connection from localhost by jeffrey, the entries with 'localhost' in the Host column match first. Of those, the entry with the blank user name matches both the connecting hostname and user name. (The '%'/'jeffrey' entry would have matched, too, but it is not the first match in the table.)

Here is another example. Suppose the user table looks like this:

+----------------+---------+-
| Host           | User    | ...
+----------------+---------+-
| %              | jeffrey | ...
| thomas.loc.gov |         | ...
+----------------+---------+-

The sorted table looks like this:

+----------------+---------+-
| Host           | User    | ...
+----------------+---------+-
| thomas.loc.gov |         | ...
| %              | jeffrey | ...
+----------------+---------+-
A connection from thomas.loc.gov by jeffrey is matched by the first entry, whereas a connection from whitehouse.gov by jeffrey is matched by the second.

A common misconception is to think that for a given user name, all entries that explicitly name that user will be used first when the server attempts to find a match for the connection. This is simply not true. The previous example illustrates this, where a connection from thomas.loc.gov by jeffrey is first matched not by the entry containing 'jeffrey' as the User field value, but by the entry with no user name!

If you have problems connecting to the server, print out the user table and sort it by hand to see where the first match is being made.
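For example, assuming you have read access to the mysql database:

shell> mysql -u root -p mysql
mysql> SELECT Host, User FROM user;
       # sort the rows by hand: most-specific Host first, then most-specific User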

MySQL - Access Control, Stage 2: Request Verification
Once you establish a connection, the server enters Stage 2. For each request that comes in on the connection, the server checks whether you have sufficient privileges to perform it, based on the type of operation you wish to perform. This is where the privilege fields in the grant tables come into play. These privileges can come from any of the user, db, host, tables_priv, or columns_priv tables. The grant tables are manipulated with GRANT and REVOKE commands.
The user table grants privileges that are assigned to you on a global basis and that apply no matter what the current database is. For example, if the user table grants you the delete privilege, you can delete rows from any database on the server host! In other words, user table privileges are superuser privileges. It is wise to grant privileges in the user table only to superusers such as server or database administrators. For other users, you should leave the privileges in the user table set to 'N' and grant privileges on a database-specific basis only, using the db and host tables.
The db and host tables grant database-specific privileges. Values in the scope fields may be specified as follows:
The wild-card characters `%' and `_' can be used in the Host and Db fields of either table.
A '%' Host value in the db table means ``any host.'' A blank Host value in the db table means ``consult the host table for further information.''
A '%' or blank Host value in the host table means ``any host.''
A '%' or blank Db value in either table means ``any database.''
A blank User value in either table matches the anonymous user.
The db and host tables are read in and sorted when the server starts up (at the same time that it reads the user table). The db table is sorted on the Host, Db, and User scope fields, and the host table is sorted on the Host and Db scope fields. As with the user table, sorting puts the most-specific values first and least-specific values last, and when the server looks for matching entries, it uses the first match that it finds.

The tables_priv and columns_priv tables grant table- and column-specific privileges. Values in the scope fields may be specified as follows:

The wild-card characters `%' and `_' can be used in the Host field of either table.
A '%' or blank Host value in either table means ``any host.''
The Db, Table_name and Column_name fields cannot contain wild cards or be blank in either table.
The tables_priv and columns_priv tables are sorted on the Host, Db, and User fields. This is similar to db table sorting, although the sorting is simpler because only the Host field may contain wild cards.

The request verification process is described below. (If you are familiar with the access-checking source code, you will notice that the description here differs slightly from the algorithm used in the code. The description is equivalent to what the code actually does; it differs only to make the explanation simpler.)

For administrative requests (shutdown, reload, etc.), the server checks only the user table entry, because that is the only table that specifies administrative privileges. Access is granted if the entry allows the requested operation and denied otherwise. For example, if you want to execute mysqladmin shutdown but your user table entry doesn't grant the shutdown privilege to you, access is denied without even checking the db or host tables. (They contain no Shutdown_priv column, so there is no need to do so.)

For database-related requests (insert, update, etc.), the server first checks the user's global (superuser) privileges by looking in the user table entry. If the entry allows the requested operation, access is granted. If the global privileges in the user table are insufficient, the server determines the user's database-specific privileges by checking the db and host tables:

The server looks in the db table for a match on the Host, Db, and User fields. The Host and User fields are matched to the connecting user's hostname and MySQL user name. The Db field is matched to the database the user wants to access. If there is no entry for the Host and User, access is denied.
If there is a matching db table entry and its Host field is not blank, that entry defines the user's database-specific privileges.
If the matching db table entry's Host field is blank, it signifies that the host table enumerates which hosts should be allowed access to the database. In this case, a further lookup is done in the host table to find a match on the Host and Db fields. If no host table entry matches, access is denied. If there is a match, the user's database-specific privileges are computed as the intersection (not the union!) of the privileges in the db and host table entries, that is, the privileges that are 'Y' in both entries. (This way you can grant general privileges in the db table entry and then selectively restrict them on a host-by-host basis using the host table entries.)
After determining the database-specific privileges granted by the db and host table entries, the server adds them to the global privileges granted by the user table. If the result allows the requested operation, access is granted. Otherwise, the server checks the user's table and column privileges in the tables_priv and columns_priv tables and adds those to the user's privileges. Access is allowed or denied based on the result.

Expressed in boolean terms, the preceding description of how a user's privileges are calculated may be summarized like this:

global privileges
OR (database privileges AND host privileges)
OR table privileges
OR column privileges

It may not be apparent why, if the global user entry privileges are initially found to be insufficient for the requested operation, the server adds those privileges to the database-, table-, and column-specific privileges later. The reason is that a request might require more than one type of privilege. For example, if you execute an INSERT ... SELECT statement, you need both insert and select privileges. Your privileges might be such that the user table entry grants one privilege and the db table entry grants the other. In this case, you have the necessary privileges to perform the request, but the server cannot tell that from either table by itself; the privileges granted by the entries in both tables must be combined.

The host table can be used to maintain a list of secure servers.

At TcX, the host table contains a list of all machines on the local network. These are granted all privileges.

You can also use the host table to indicate hosts that are not secure. Suppose you have a machine public.your.domain that is located in a public area that you do not consider secure. You can allow access to all hosts on your network except that machine by using host table entries like this:


+--------------------+----+-
| Host               | Db | ...
+--------------------+----+-
| public.your.domain | %  | ... (all privileges set to 'N')
| %.your.domain      | %  | ... (all privileges set to 'Y')
+--------------------+----+-

Naturally, you should always test your entries in the grant tables (for example, using mysqlaccess) to make sure your access privileges are actually set up the way you think they are.

MySQL - When Privilege Changes Take Effect
When mysqld starts, all grant table contents are read into memory and become effective at that point.
Modifications to the grant tables that you perform using GRANT, REVOKE, or SET PASSWORD are noticed by the server immediately.

If you modify the grant tables manually (using INSERT, UPDATE, etc.), you should execute a FLUSH PRIVILEGES statement or run mysqladmin flush-privileges or mysqladmin reload to tell the server to reload the grant tables. Otherwise your changes will have no effect until you restart the server. If you change the grant tables manually but forget to reload the privileges, you will be wondering why your changes don't seem to make any difference!

When the server notices that the grant tables have been changed, existing client connections are affected as follows:

Table and column privilege changes take effect with the client's next request.
Database privilege changes take effect at the next USE db_name command.
Global privilege changes and password changes take effect the next time the client connects.

Setting Up the Initial MySQL Privileges
After installing MySQL, you set up the initial access privileges by running scripts/mysql_install_db. The mysql_install_db script starts up the mysqld server, then initializes the grant tables to contain the following set of privileges:

The MySQL root user is created as a superuser who can do anything. Connections must be made from the local host. NOTE: The initial root password is empty, so anyone can connect as root without a password and be granted all privileges.
An anonymous user is created that can do anything with databases that have a name of 'test' or starting with 'test_'. Connections must be made from the local host. This means any local user can connect without a password and be treated as the anonymous user.
Other privileges are denied. For example, normal users can't use mysqladmin shutdown or mysqladmin processlist.
NOTE: The default privileges are different for Windows.

Because your installation is initially wide open, one of the first things you should do is specify a password for the MySQL root user. You can do this as follows (note that you specify the password using the PASSWORD() function):

shell> mysql -u root mysql
mysql> UPDATE user SET Password=PASSWORD('new_password')
WHERE user='root';
mysql> FLUSH PRIVILEGES;

You can, in MySQL Version 3.22 and above, use the SET PASSWORD statement:

shell> mysql -u root mysql
mysql> SET PASSWORD FOR root=PASSWORD('new_password');

Another way to set the password is by using the mysqladmin command:

shell> mysqladmin -u root password new_password

Only users with write/update access to the mysql database can change the password for other users. All normal users (not anonymous ones) can only change their own password with either of the above commands or with SET PASSWORD=PASSWORD('new password').

Note that if you update the password in the user table directly using the first method, you must tell the server to re-read the grant tables (with FLUSH PRIVILEGES), because the change will go unnoticed otherwise.

Once the root password has been set, thereafter you must supply that password when you connect to the server as root.

You may wish to leave the root password blank so that you don't need to specify it while you perform additional setup or testing. However, be sure to set it before using your installation for any real production work.

See the scripts/mysql_install_db script to see how it sets up the default privileges. You can use this as a basis to see how to add other users.

If you want the initial privileges to be different than those just described above, you can modify mysql_install_db before you run it.

To re-create the grant tables completely, remove all the `.frm', `.MYI', and `.MYD' files in the directory containing the mysql database. (This is the directory named `mysql' under the database directory, which is listed when you run mysqld --help.) Then run the mysql_install_db script, possibly after editing it first to have the privileges you want.

NOTE: For MySQL versions older than Version 3.22.10, you should NOT delete the `.frm' files. If you accidentally do this, you should copy them back from your MySQL distribution before running mysql_install_db.

Adding New User Privileges to MySQL
You can add users in two different ways: by using GRANT statements or by manipulating the MySQL grant tables directly. The preferred method is to use GRANT statements, because they are more concise and less error-prone.

The examples below show how to use the mysql client to set up new users. These examples assume that privileges are set up according to the defaults described in the previous section. This means that to make changes, you must be on the same machine where mysqld is running, you must connect as the MySQL root user, and the root user must have the insert privilege for the mysql database and the reload administrative privilege. Also, if you have changed the root user password, you must specify it for the mysql commands below.

You can add new users by issuing GRANT statements:

shell> mysql --user=root mysql
mysql> GRANT ALL PRIVILEGES ON *.* TO monty@localhost
       IDENTIFIED BY 'some_pass' WITH GRANT OPTION;
mysql> GRANT ALL PRIVILEGES ON *.* TO monty@"%"
       IDENTIFIED BY 'some_pass' WITH GRANT OPTION;
mysql> GRANT RELOAD,PROCESS ON *.* TO admin@localhost;
mysql> GRANT USAGE ON *.* TO dummy@localhost;

These GRANT statements set up three new users:

monty
A full superuser who can connect to the server from anywhere, but who must use a password 'some_pass' to do so. Note that we must issue GRANT statements for both monty@localhost and monty@"%". If we don't add the entry with localhost, the anonymous user entry for localhost that is created by mysql_install_db will take precedence when we connect from the local host, because it has a more specific Host field value and thus comes earlier in the user table sort order.
admin
A user who can connect from localhost without a password and who is granted the reload and process administrative privileges. This allows the user to execute the mysqladmin reload, mysqladmin refresh, and mysqladmin flush-* commands, as well as mysqladmin processlist. No database-related privileges are granted. (They can be granted later by issuing additional GRANT statements.)
dummy
A user who can connect without a password, but only from the local host. The global privileges are all set to 'N' -- the USAGE privilege type allows you to create a user with no privileges. It is assumed that you will grant database-specific privileges later.
You can also add the same user access information directly by issuing INSERT statements and then telling the server to reload the grant tables:

shell> mysql --user=root mysql
mysql> INSERT INTO user VALUES('localhost','monty',PASSWORD('some_pass'),
       'Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y');
mysql> INSERT INTO user VALUES('%','monty',PASSWORD('some_pass'),
       'Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y');
mysql> INSERT INTO user SET Host='localhost',User='admin',
       Reload_priv='Y', Process_priv='Y';
mysql> INSERT INTO user (Host,User,Password)
       VALUES('localhost','dummy','');
mysql> FLUSH PRIVILEGES;

Depending on your MySQL version, you may have to use a different number of 'Y' values above (versions prior to Version 3.22.11 had fewer privilege columns). For the admin user, the more readable extended INSERT syntax that is available starting with Version 3.22.11 is used.

Note that to set up a superuser, you need only create a user table entry with the privilege fields set to 'Y'. No db or host table entries are necessary.

The privilege columns in the user table were not set explicitly in the last INSERT statement (for the dummy user), so those columns are assigned the default value of 'N'. This is the same thing that GRANT USAGE does.
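Whichever method you use, you can check the result afterwards with SHOW GRANTS (available in recent 3.23 versions), for example:

mysql> SHOW GRANTS FOR monty@localhost;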

The following example adds a user custom who can connect from hosts localhost, server.domain, and whitehouse.gov. He wants to access the bankaccount database only from localhost, the expenses database only from whitehouse.gov, and the customer database from all three hosts. He wants to use the password stupid from all three hosts.

To set up this user's privileges using GRANT statements, run these commands:

shell> mysql --user=root mysql
mysql> GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP
       ON bankaccount.*
       TO custom@localhost
       IDENTIFIED BY 'stupid';
mysql> GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP
       ON expenses.*
       TO custom@whitehouse.gov
       IDENTIFIED BY 'stupid';
mysql> GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP
       ON customer.*
       TO custom@'%'
       IDENTIFIED BY 'stupid';

To set up the user's privileges by modifying the grant tables directly, run these commands (note the FLUSH PRIVILEGES at the end):

shellgt; mysql --user=root mysql
mysqlgt; INSERT INTO user (Host,User,Password)
VALUES('localhost','custom',PASSWORD('stupid'));
mysql> INSERT INTO user (Host,User,Password)
VALUES('server.domain','custom',PASSWORD('stupid'));
mysql> INSERT INTO user (Host,User,Password)
VALUES('whitehouse.gov','custom',PASSWORD('stupid'));
mysql> INSERT INTO db
(Host,Db,User,Select_priv,Insert_priv,Update_priv,Delete_priv,
Create_priv,Drop_priv)
VALUES
('localhost','bankaccount','custom','Y','Y','Y','Y','Y','Y');
mysql> INSERT INTO db
(Host,Db,User,Select_priv,Insert_priv,Update_priv,Delete_priv,
Create_priv,Drop_priv)
VALUES
('whitehouse.gov','expenses','custom','Y','Y','Y','Y','Y','Y');
mysql> INSERT INTO db
(Host,Db,User,Select_priv,Insert_priv,Update_priv,Delete_priv,
Create_priv,Drop_priv)
VALUES('%','customer','custom','Y','Y','Y','Y','Y','Y');
mysql> FLUSH PRIVILEGES;

The first three INSERT statements add user table entries that allow user custom to connect from the various hosts with the given password, but grant no permissions to him (all privileges are set to the default value of 'N'). The next three INSERT statements add db table entries that grant privileges to custom for the bankaccount, expenses, and customer databases, but only when accessed from the proper hosts. As usual, when the grant tables are modified directly, the server must be told to reload them (with FLUSH PRIVILEGES) so that the privilege changes take effect.

If you want to give a specific user access from any machine in a given domain, you can issue a GRANT statement like the following:

mysql> GRANT ...
ON *.*
TO myusername@"%.mydomainname.com"
IDENTIFIED BY 'mypassword';

To do the same thing by modifying the grant tables directly, do this:

mysql> INSERT INTO user VALUES ('%.mydomainname.com', 'myusername',
PASSWORD('mypassword'),...);
mysql> FLUSH PRIVILEGES;

You can also use xmysqladmin, mysql_webadmin, and even xmysql to insert, change, and update values in the grant tables. You can find these utilities in the Contrib directory of the MySQL Website.

MySQL - Setting Up Passwords
In most cases you should use GRANT to set up your users/passwords, so the following only applies for advanced users.
The examples in the preceding sections illustrate an important principle: when you store a non-empty password using INSERT or UPDATE statements, you must use the PASSWORD() function to encrypt it. This is because the user table stores passwords in encrypted form, not as plaintext. If you forget that fact, you are likely to attempt to set passwords like this:

shell> mysql -u root mysql
mysql> INSERT INTO user (Host,User,Password)
VALUES('%','jeffrey','biscuit');
mysql> FLUSH PRIVILEGES;

The result is that the plaintext value 'biscuit' is stored as the password in the user table. When the user jeffrey attempts to connect to the server using this password, the mysql client encrypts it with PASSWORD() and sends the result to the server. The server compares the value in the user table (the plaintext value 'biscuit') to the encrypted password sent by the client (which is not 'biscuit'). The comparison fails and the server rejects the connection:

shell> mysql -u jeffrey -pbiscuit test
Access denied

Passwords must be encrypted when they are inserted in the user table, so the INSERT statement should have been specified like this instead:

mysql> INSERT INTO user (Host,User,Password)
VALUES('%','jeffrey',PASSWORD('biscuit'));

You must also use the PASSWORD() function when you use SET PASSWORD statements:

mysql> SET PASSWORD FOR jeffrey@"%" = PASSWORD('biscuit');

If you set passwords using the GRANT ... IDENTIFIED BY statement or the mysqladmin password command, the PASSWORD() function is unnecessary. They both take care of encrypting the password for you, so you would specify a password of 'biscuit' like this:

mysql> GRANT USAGE ON *.* TO jeffrey@"%" IDENTIFIED BY 'biscuit';

or

shell> mysqladmin -u jeffrey password biscuit

NOTE: PASSWORD() does not perform password encryption in the same way that Unix passwords are encrypted. You should not assume that if your Unix password and your MySQL password are the same, that PASSWORD() will result in the same encrypted value as is stored in the Unix password file.

MySQL - Causes of Access denied Errors
If you encounter Access denied errors when you try to connect to the MySQL server, the list below indicates some courses of action you can take to correct the problem:

After installing MySQL, make sure the initial grant tables were created by the mysql_install_db script; if they were, the server should let you connect without error when you test with mysql -u root test. You should also make sure you have a file `user.MYD' in the MySQL database directory. Ordinarily, this is `PATH/var/mysql/user.MYD', where PATH is the pathname to the MySQL installation root.
After a fresh installation, you should connect to the server and set up your users and their access permissions:
shell> mysql -u root mysql

The server should let you connect because the MySQL root user has no password initially. That is also a security risk, so setting the root password is something you should do while you're setting up your other MySQL users. If you try to connect as root and get this error:
Access denied for user: '@unknown' to database mysql

this means that you don't have an entry in the user table with a User column value of 'root' and that mysqld cannot resolve the hostname for your client. In this case, you must restart the server with the --skip-grant-tables option and edit your `/etc/hosts' or `\windows\hosts' file to add an entry for your host.
If you updated an existing MySQL installation from a version earlier than Version 3.22.11 to Version 3.22.11 or later, did you run the mysql_fix_privilege_tables script? If not, do so. The structure of the grant tables changed with MySQL Version 3.22.11 when the GRANT statement became functional.

If you can't get your password to work, remember that you must use the PASSWORD() function if you set the password with the INSERT, UPDATE, or SET PASSWORD statements. The PASSWORD() function is unnecessary if you specify the password using the GRANT ... IDENTIFIED BY statement or the mysqladmin password command.
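For example, a minimal sketch of resetting a password directly in the user table (reusing the user name jeffrey and password from the example above) would be:
mysql> UPDATE user SET Password=PASSWORD('biscuit') WHERE User='jeffrey';
mysql> FLUSH PRIVILEGES;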
localhost is a synonym for your local hostname, and is also the default host to which clients try to connect if you specify no host explicitly. However, connections to localhost do not work if you are running on a system that uses MIT-pthreads (localhost connections are made using Unix sockets, which are not supported by MIT-pthreads). To avoid this problem on such systems, you should use the --host option to name the server host explicitly. This will make a TCP/IP connection to the mysqld server. In this case, you must have your real hostname in user table entries on the server host. (This is true even if you are running a client program on the same host as the server.)
If you get an Access denied error when trying to connect to the database with mysql -u user_name db_name, you may have a problem with the user table. Check this by executing mysql -u root mysql and issuing this SQL statement:
mysql> SELECT * FROM user;

The result should include an entry with the Host and User columns matching your computer's hostname and your MySQL user name.
The Access denied error message will tell you who you are trying to log in as, the host from which you are trying to connect, and whether or not you were using a password. Normally, you should have one entry in the user table that exactly matches the hostname and user name that were given in the error message. For example, if you get an error message that contains Using password: NO, it means that you tried to log in without a password.
If you get the following error when you try to connect from a different host than the one on which the MySQL server is running, then there is no row in the user table that matches that host:
Host ... is not allowed to connect to this MySQL server

You can fix this by using the command-line tool mysql (on the server host!) to add a row to the user, db, or host table for the user/hostname combination from which you are trying to connect and then execute mysqladmin flush-privileges. If you are not running MySQL Version 3.22 and you don't know the IP number or hostname of the machine from which you are connecting, you should put an entry with '%' as the Host column value in the user table and restart mysqld with the --log option on the server machine. After trying to connect from the client machine, the information in the MySQL log will indicate how you really did connect. (Then replace the '%' in the user table entry with the actual hostname that shows up in the log. Otherwise, you'll have a system that is insecure.) Another reason for this error on Linux is that you are using a binary MySQL version that is compiled with a different glibc version than the one you are using. In this case you should either upgrade your OS/glibc or download the source MySQL version and compile this yourself. A source RPM is normally trivial to compile and install, so this isn't a big problem.
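As a sketch (the hostname, user name, and password below are placeholders), adding such a row and reloading the grant tables might look like this:
mysql> INSERT INTO user (Host,User,Password)
VALUES('client.host.name','user_name',PASSWORD('some_pass'));
mysql> FLUSH PRIVILEGES;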
If you get an error message where the hostname is not shown or where the hostname is an IP, even if you try to connect with a hostname:
shell> mysqladmin -u root -pxxxx -h some-hostname ver
Access denied for user: 'root' (Using password: YES)

This means that MySQL got some error when trying to resolve the IP to a hostname. In this case you can execute mysqladmin flush-hosts to reset the internal DNS cache. Some permanent solutions are:
Try to find out what is wrong with your DNS server and fix this.
Specify IPs instead of hostnames in the MySQL privilege tables.
Start mysqld with --skip-name-resolve.
Start mysqld with --skip-host-cache.
Connect to localhost if you are running the server and the client on the same machine.
Put the client machine names in /etc/hosts.
If mysql -u root test works but mysql -h your_hostname -u root test results in Access denied, then you may not have the correct name for your host in the user table. A common problem here is that the Host value in the user table entry specifies an unqualified hostname, but your system's name resolution routines return a fully qualified domain name (or vice-versa). For example, if you have an entry with host 'tcx' in the user table, but your DNS tells MySQL that your hostname is 'tcx.subnet.se', the entry will not work. Try adding an entry to the user table that contains the IP number of your host as the Host column value. (Alternatively, you could add an entry to the user table with a Host value that contains a wild card--for example, 'tcx.%'. However, use of hostnames ending with `%' is insecure and is not recommended!) If mysql -u user_name test works but mysql -u user_name other_db_name doesn't work, you don't have an entry for other_db_name listed in the db table.
If mysql -u user_name db_name works when executed on the server machine, but mysql -h host_name -u user_name db_name doesn't work when executed on another client machine, you don't have the client machine listed in the user table or the db table. If you can't figure out why you get Access denied, remove from the user table all entries that have Host values containing wild cards (entries that contain `%' or `_'). A very common error is to insert a new entry with Host='%' and User='some_user', thinking that this will allow you to specify localhost to connect from the same machine. The reason that this doesn't work is that the default privileges include an entry with Host='localhost' and User=''. Because that entry has a Host value 'localhost' that is more specific than '%', it is used in preference to the new entry when connecting from localhost! The correct procedure is to insert a second entry with Host='localhost' and User='some_user', or to remove the entry with Host='localhost' and User=''.
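For example, a hedged sketch of that last suggestion, removing the anonymous localhost entry and reloading the grant tables:
mysql> DELETE FROM user WHERE Host='localhost' AND User='';
mysql> FLUSH PRIVILEGES;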
If you get the following error, you may have a problem with the db or host table:
Access to database denied

If the entry selected from the db table has an empty value in the Host column, make sure there are one or more corresponding entries in the host table specifying which hosts the db table entry applies to. If you get the error when using the SQL commands SELECT ... INTO OUTFILE or LOAD DATA INFILE, your entry in the user table probably doesn't have the file privilege enabled.
Remember that client programs will use connection parameters specified in configuration files or environment variables. If a client seems to be sending the wrong default connection parameters when you don't specify them on the command line, check your environment and the `.my.cnf' file in your home directory. You might also check the system-wide MySQL configuration files, though it is far less likely that client connection parameters will be specified there. If you get Access denied when you run a client without any options, make sure you haven't specified an old password in any of your option files!
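A `.my.cnf' entry that silently supplies connection parameters might look like the following sketch (all values are placeholders):
[client]
user=some_user
password=old_password
host=some_host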

If you make changes to the grant tables directly (using an INSERT or UPDATE statement) and your changes seem to be ignored, remember that you must issue a FLUSH PRIVILEGES statement or execute a mysqladmin flush-privileges command to cause the server to re-read the privilege tables. Otherwise your changes have no effect until the next time the server is restarted. Remember that after you set the root password with an UPDATE command, you won't need to specify it until after you flush the privileges, because the server won't know you've changed the password yet!
If you have access problems with a Perl, PHP, Python, or ODBC program, try to connect to the server with mysql -u user_name db_name or mysql -u user_name -pyour_pass db_name. If you are able to connect using the mysql client, there is a problem with your program and not with the access privileges. (Note that there is no space between -p and the password; you can also use the --password=your_pass syntax to specify the password. If you use the -p option alone, MySQL will prompt you for the password.)
For testing, start the mysqld daemon with the --skip-grant-tables option. Then you can change the MySQL grant tables and use the mysqlaccess script to check whether or not your modifications have the desired effect. When you are satisfied with your changes, execute mysqladmin flush-privileges to tell the mysqld server to start using the new grant tables. Note: Reloading the grant tables overrides the --skip-grant-tables option. This allows you to tell the server to begin using the grant tables again without bringing it down and restarting it.
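A rough sketch of that testing cycle (the exact mysqlaccess arguments are an assumption; check its help output on your system):
shell> mysqld --skip-grant-tables &
shell> mysqlaccess localhost user_name db_name
shell> mysqladmin flush-privileges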
If everything else fails, start the mysqld daemon with a debugging option (for example, --debug=d,general,query). This will print host and user information about attempted connections, as well as information about each command issued.

If you have any other problems with the MySQL grant tables and feel you must post the problem to the mailing list, always provide a dump of the MySQL grant tables. You can dump the tables with the mysqldump mysql command. As always, post your problem using the mysqlbug script.

Replication in MySQL
One way replication can be used is to increase both robustness and speed. For robustness you can have two systems and can switch to the backup if you have problems with the master. The extra speed is achieved by sending a part of the non-updating queries to the replica server. Of course this only works if non-updating queries dominate, but that is the normal case.

Starting in Version 3.23.15, MySQL supports one-way replication internally. One server acts as the master, while the other acts as the slave. Note that one server could play the roles of master in one pair and slave in the other. The master server keeps a binary log of updates and an index file to binary logs to keep track of log rotation. The slave, upon connecting, informs the master where it left off since the last successfully propagated update, catches up on the updates, and then blocks and waits for the master to notify it of the new updates.

Note that if you are replicating a database, all updates to this database should be done through the master!

On older servers one can use the update log to do simple replication.

Another benefit of using replication is that one can get live backups of the system by doing a backup on a slave instead of doing it on the master.

MySQL - Replication Implementation Overview
MySQL replication is based on the master server keeping track of all changes to your database (updates, deletes, and so on) in the binary log, and the slave server(s) reading the saved queries from the master server's binary log so that the slave can execute the same queries on its copy of the data.

It is very important to realize that the binary log is simply a record starting from a fixed point in time (the moment you enable binary logging). Any slaves which you set up will need copies of all the data from your master as it existed the moment that you enabled binary logging on the master. If you start your slaves with data that doesn't agree with what was on the master when the binary log was started, your slaves may fail.

A future version (4.0) of MySQL is expected to remove the need to keep a (possibly large) snapshot of data for new slaves, by providing live backup functionality that requires no locking. However, at this time, it is necessary to block all writes either with a global read lock or by shutting down the master while taking a snapshot.

Once a slave is properly configured and running, it will simply connect to the master and wait for updates to process. If the master goes away or the slave loses connectivity with your master, it will keep trying to connect every master-connect-retry seconds until it is able to reconnect and resume listening for updates.

Each slave keeps track of where it left off. The master server has no knowledge of how many slaves there are or which ones are up-to-date at any given time.

MySQL - HOWTO
Below is a quick description of how to set up complete replication on your current MySQL server. It assumes you want to replicate all your databases and have not configured replication before. You will need to shut down your master server briefly to complete the steps outlined below.

Make sure you have a recent version of MySQL installed on the master and slave(s). Use Version 3.23.29 or higher. Previous releases used a different binary log format and had bugs which have been fixed in newer releases. Please, do not report bugs until you have verified that the problem is present in the latest release.
Set up a special replication user on the master with the FILE privilege and permission to connect from all the slaves. If the user is only doing replication (which is recommended), you don't need to grant any additional privileges. For example, to create a user named repl which can access your master from any host, you might use this command:
GRANT FILE ON *.* TO repl@"%" IDENTIFIED BY '';

Shut down MySQL on the master.
mysqladmin -u root -p shutdown

Snapshot all the data on your master server. The easiest way to do this (on Unix) is to simply use tar to produce an archive of your entire data directory. The exact data directory location depends on your installation.
tar -cvf /tmp/mysql-snapshot.tar /path/to/data-dir

Windows users can use WinZip or similar software to create an archive of the data directory.
In my.cnf on the master, add log-bin and server-id=unique_number to the [mysqld] section and restart it. It is very important that the id of the slave is different from the id of the master. Think of server-id as something similar to an IP address: it uniquely identifies the server instance in the community of replication partners.
[mysqld]
log-bin
server-id=1

Restart MySQL on the master.
Add the following to my.cnf on the slave(s):
[mysqld]
master-host=<hostname of the master>
master-user=<replication user name>
master-password=<replication user password>
master-port=<port the master listens on>
server-id=<some unique number between 2 and 2^32-1>

replacing the values in <> with what is relevant to your system. server-id must be different for each server participating in replication. If you don't specify a server-id, it will be set to 1 if you have not defined master-host, else it will be set to 2. Note that in the case of server-id omission the master will refuse connections from all slaves, and the slave will refuse to connect to a master. Thus, omitting server-id is only good for backup with a binary log.
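For example, a filled-in slave configuration might look like the following sketch (the hostname is illustrative; the repl user with an empty password matches the GRANT example above):
[mysqld]
master-host=db-master.mycompany.com
master-user=repl
master-password=
master-port=3306
server-id=2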
Copy the snapshot data into your data directory on your slave(s). Make sure that the privileges on the files and directories are correct. The user which MySQL runs as needs to be able to read and write to them, just as on the master.
Restart the slave(s).
After you have done the above, the slave(s) should connect to the master and catch up on any updates which happened since the snapshot was taken.

If you have forgotten to set server-id for the slave you will get the following error in the error log file:

Warning: one should set server_id to a non-0 value if master_host is set.
The server will not act as a slave.

If you have forgotten to do this for the master, the slaves will not be able to connect to the master.

If a slave is not able to replicate for any reason, you will find error messages in the error log on the slave.

Once a slave is replicating, you will find a file called master.info in the same directory as your error log. The master.info file is used by the slave to keep track of how much of the master's binary log it has processed. Do not remove or edit the file unless you really know what you are doing. Even in that case, it is preferred that you use the CHANGE MASTER TO command.

MySQL - Replication Features and known problems
Below is an explanation of what is supported and what is not:

Replication will be done correctly with AUTO_INCREMENT, LAST_INSERT_ID, and TIMESTAMP values.
RAND() in updates does not replicate properly. Use RAND(some_non_rand_expr) if you are replicating updates with RAND(). You can, for example, use UNIX_TIMESTAMP() for the argument to RAND().
LOAD DATA INFILE will be handled properly as long as the file still resides on the master server at the time of update propagation. LOAD DATA LOCAL INFILE will be skipped.
Update queries that use user variables are not replication-safe (yet).
Temporary tables, starting in 3.23.29, are replicated properly, with the exception of the case when you shut down the slave server (not just the slave thread) while you have some temporary tables open and they are used in subsequent updates. To deal with this problem, to shut down the slave, do SLAVE STOP, then check the Slave_open_temp_tables variable to see if it is 0, then issue mysqladmin shutdown. If the number is not 0, restart the slave thread with SLAVE START and see if you have better luck next time. There will be a cleaner solution, but it has to wait until version 4.0. In earlier versions temporary tables are not replicated properly - we recommend that you either upgrade, or execute SET SQL_LOG_BIN=0 on your clients before all queries with temporary tables.
MySQL only supports one master and many slaves. We will in 4.x add a voting algorithm to automatically change master if something goes wrong with the current master. We will also introduce 'agent' processes to help doing load balancing by sending select queries to different slaves.
Starting in Version 3.23.26, it is safe to connect servers in a circular master-slave relationship with log-slave-updates enabled. Note, however, that many queries will not work right in this kind of setup unless your client code is written to take care of the potential problems that can happen from updates that occur in different sequence on different servers. Note that the log format has changed in Version 3.23.26 so that pre-3.23.26 slaves will not be able to read it.
If the query on the slave gets an error, the slave thread will terminate, and a message will appear in the .err file. You should then connect to the slave manually, fix the cause of the error (for example, a non-existent table), and then run the SLAVE START SQL command (available starting in Version 3.23.16). In Version 3.23.15, you will have to restart the server.
If connection to the master is lost, the slave will retry immediately, and then in case of failure every master-connect-retry (default 60) seconds. Because of this, it is safe to shut down the master, and then restart it after a while. The slave will also be able to deal with network connectivity outages.
Shutting down the slave (cleanly) is also safe, as it keeps track of where it left off. Unclean shutdowns might produce problems, especially if disk cache was not synced before the system died. Your system fault tolerance will be greatly increased if you have a good UPS.
If the master is listening on a non-standard port, you will also need to specify this with master-port parameter in my.cnf . In Version 3.23.15, all of the tables and databases will be replicated. Starting in Version 3.23.16, you can restrict replication to a set of databases with replicate-do-db directives in my.cnf or just exclude a set of databases with replicate-ignore-db. Note that up until Version 3.23.23, there was a bug that did not properly deal with LOAD DATA INFILE if you did it in a database that was excluded from replication.
Starting in Version 3.23.16, SET SQL_LOG_BIN = 0 will turn off replication (binary) logging on the master, and SET SQL_LOG_BIN = 1 will turn it back on - you must have the process privilege to do this.
Starting in Version 3.23.19, you can clean up stale replication leftovers when something goes wrong and you want a clean start with FLUSH MASTER and FLUSH SLAVE commands. In Version 3.23.26 we have renamed them to RESET MASTER and RESET SLAVE respectively to clarify what they do. The old FLUSH variants still work, though, for compatibility.
Starting in Version 3.23.21, you can use LOAD TABLE FROM MASTER for network backup and to set up replication initially. We have recently received a number of bug reports concerning it that we are investigating, so we recommend that you use it only in testing until we make it more stable.
Starting in Version 3.23.23, you can change masters and adjust log position with CHANGE MASTER TO.
Starting in Version 3.23.23, you tell the master that updates in certain databases should not be logged to the binary log with binlog-ignore-db.
Starting in Version 3.23.26, you can use replicate-rewrite-db to tell the slave to apply updates from one database on the master to the one with a different name on the slave.
Starting in Version 3.23.28, you can use PURGE MASTER LOGS TO 'log-name' to get rid of old logs while the slave is running.

MySQL - Replication Options in my.cnf
If you are using replication, we recommend you to use MySQL Version 3.23.30 or later. Older versions work, but they do have some bugs and are missing some features.

On both master and slave you need to use the server-id option. This sets a unique replication id. You should pick a unique value between 1 and 2^32-1 for each master and slave. Example: server-id=3

The following table has the options you can use for the MASTER:

Option Description
log-bin=filename Write the binary update log to the specified location. Note that if you give it a parameter with an extension (for example, log-bin=/mysql/logs/replication.log) versions up to 3.23.24 will not work right during replication if you do FLUSH LOGS. The problem is fixed in Version 3.23.25. If you are using this kind of log name, FLUSH LOGS will be ignored on binlog. To clear the log, run FLUSH MASTER, and do not forget to run FLUSH SLAVE on all slaves. In Version 3.23.26 and in later versions you should use RESET MASTER and RESET SLAVE.
log-bin-index=filename Because the user could issue the FLUSH LOGS command, we need to know which log is currently active and which ones have been rotated out and in what sequence. This information is stored in the binary log index file. The default is `hostname`.index. You can use this option if you want to be a rebel. (Example: log-bin-index=db.index)
sql-bin-update-same If set, setting SQL_LOG_BIN to a value will automatically set SQL_LOG_UPDATE to the same value and vice versa.
binlog-do-db=database_name Tells the master it should log updates for the specified database, and exclude all others not explicitly mentioned. (Example: binlog-do-db=some_database)
binlog-ignore-db=database_name Tells the master that updates to the given database should not be logged to the binary log (Example: binlog-ignore-db=some_database)

The following table has the options you can use for the SLAVE:

Option Description
master-host=host Master hostname or IP address for replication. If not set, the slave thread will not be started. (Example: master-host=db-master.mycompany.com)
master-user=username The user the slave thread will use for authentication when connecting to the master. The user must have the FILE privilege. If the master user is not set, user test is assumed. (Example: master-user=scott)
master-password=password The password the slave thread will authenticate with when connecting to the master. If not set, an empty password is assumed. (Example: master-password=tiger)
master-port=portnumber The port the master is listening on. If not set, the compiled setting of MYSQL_PORT is assumed. If you have not tinkered with configure options, this should be 3306. (Example: master-port=3306)
master-connect-retry=seconds The number of seconds the slave thread will sleep before retrying to connect to the master in case the master goes down or the connection is lost. Default is 60. (Example: master-connect-retry=60)
master-info-file=filename The location of the file that remembers where we left off on the master during the replication process. The default is master.info in the data directory. Sasha: The only reason I see for ever changing the default is the desire to be rebellious. (Example: master-info-file=master.info)
replicate-do-table=db_name.table_name Tells the slave thread to restrict replication to the specified table. To specify more than one table, use the directive multiple times, once for each table. (Example:
replicate-do-table=some_db.some_table)
replicate-ignore-table=db_name.table_name Tells the slave thread to not replicate to the specified table. To specify more than one table to ignore, use the directive multiple times, once for each table.(Example:
replicate-ignore-table=db_name.some_table)
replicate-wild-do-table=db_name.table_name Tells the slave thread to restrict replication to the tables that match the specified wildcard pattern. To specify more than one pattern, use the directive multiple times, once for each pattern. (Example: replicate-wild-do-table=foo%.bar% will replicate only updates to tables in all databases that start with foo and whose table names start with bar)
replicate-wild-ignore-table=db_name.table_name Tells the slave thread to not replicate to the tables that match the given wildcard pattern. To specify more than one pattern to ignore, use the directive multiple times, once for each pattern. (Example: replicate-wild-ignore-table=foo%.bar% will not replicate updates to tables in all databases that start with foo and whose table names start with bar)
replicate-ignore-db=database_name Tells the slave thread to not replicate to the specified database. To specify more than one database to ignore, use the directive multiple times, once for each database. This option will not work if you use cross database updates. If you need cross database updates to work, make sure you have 3.23.28 or later, and use replicate-wild-ignore-table=db_name.%(Example: replicate-ignore-db=some_db)
replicate-do-db=database_name Tells the slave thread to restrict replication to the specified database. To specify more than one database, use the directive multiple times, once for each database. Note that this will only work if you do not use cross-database queries such as UPDATE some_db.some_table SET foo='bar' while having selected a different or no database. If you need cross database updates to work, make sure you have 3.23.28 or later, and use replicate-wild-do-table=db_name.% (Example: replicate-do-db=some_db)
log-slave-updates Tells the slave to log the updates from the slave thread to the binary log. Off by default. You will need to turn it on if you plan to daisy-chain the slaves.
replicate-rewrite-db=from_name->to_name Tells the slave to apply updates for a database on the master to a database with a different name on the slave. (Example: replicate-rewrite-db=master_db_name->slave_db_name)
skip-slave-start Tells the slave server not to start the slave thread on startup. The user can start it later with SLAVE START.
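To illustrate how a few of these options fit together, here is a sketch of master and slave my.cnf fragments that replicate only one database (the database, host, and user names are hypothetical):

# master
[mysqld]
log-bin
server-id=1

# slave
[mysqld]
server-id=2
master-host=db-master.mycompany.com
master-user=repl
replicate-wild-do-table=some_db.%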

MySQL - SQL Commands Related to Replication
Replication can be controlled through the SQL interface. Below is the summary of commands:

Command Description
SLAVE START Starts the slave thread. (Slave)
SLAVE STOP Stops the slave thread. (Slave)
SET SQL_LOG_BIN=0 Disables update logging if the user has process privilege. Ignored otherwise. (Master)
SET SQL_LOG_BIN=1 Re-enables update logging if the user has process privilege. Ignored otherwise. (Master)
SET SQL_SLAVE_SKIP_COUNTER=n Skip the next n events from the master. Only valid when the slave thread is not running, otherwise, gives an error. Useful for recovering from replication glitches.
RESET MASTER Deletes all binary logs listed in the index file, resetting the binlog index file to be empty. In pre-3.23.26 versions the command was called FLUSH MASTER. (Master)
RESET SLAVE Makes the slave forget its replication position in the master logs. In pre-3.23.26 versions the command was called FLUSH SLAVE. (Slave)
LOAD TABLE tblname FROM MASTER Downloads a copy of the table from master to the slave. (Slave)
CHANGE MASTER TO master_def_list Changes the master parameters to the values specified in master_def_list and restarts the slave thread. master_def_list is a comma-separated list of master_def where master_def is one of the following: MASTER_HOST, MASTER_USER, MASTER_PASSWORD, MASTER_PORT, MASTER_CONNECT_RETRY, MASTER_LOG_FILE, MASTER_LOG_POS. Example:

CHANGE MASTER TO
MASTER_HOST='master2.mycompany.com',
MASTER_USER='replication',
MASTER_PASSWORD='bigs3cret',
MASTER_PORT=3306,
MASTER_LOG_FILE='master2-bin.001',
MASTER_LOG_POS=4;

You only need to specify the values that need to be changed. The values that you omit will stay the same with the exception of when you change the host or the port. In that case, the slave will assume that since you are connecting to a different host or a different port, the master is different. Therefore, the old values of log and position are not applicable anymore, and will automatically be reset to an empty string and 0, respectively (the start values). Note that if you restart the slave, it will remember its last master. If this is not desirable, you should delete the `master.info' file before restarting, and the slave will read its master from my.cnf or the command line. (Slave)
SHOW MASTER STATUS Provides status information on the binlog of the master. (Master)
SHOW SLAVE STATUS Provides status information on essential parameters of the slave thread. (Slave)
SHOW MASTER LOGS Only available starting in Version 3.23.28. Lists the binary logs on the master. You should use this command prior to PURGE MASTER LOGS TO to find out how far you should go.
PURGE MASTER LOGS TO 'logname' Available starting in Version 3.23.28. Deletes all the replication logs that are listed in the log index as being prior to the specified log, and removes them from the log index, so that the given log now becomes the first. Example:
PURGE MASTER LOGS TO 'mysql-bin.010'

This command will do nothing and fail with an error if you have an active slave that is currently reading one of the logs you are trying to delete. However, if you have a dormant slave, and happen to purge one of the logs it wants to read, the slave will be unable to replicate once it comes up. The command is safe to run while slaves are replicating - you do not need to stop them. You must first check all the slaves with SHOW SLAVE STATUS to see which log they are on, then do a listing of the logs on the master with SHOW MASTER LOGS, find the earliest log among all the slaves (if all the slaves are up to date, this will be the last log on the list), back up all the logs you are about to delete (optional), and purge up to the target log.

MySQL - Replication FAQ
Why do I sometimes see more than one Binlog_Dump thread on the master after I have restarted the slave?

Binlog_Dump is a continuous process that is handled by the server in the following way:
Catch up on the updates.
Once there are no more updates left, go into pthread_cond_wait(), from which we can be awakened either by an update or a kill.
On wake up, check the reason. If we are not supposed to die, continue the Binlog_dump loop.
If there is some fatal error, such as detecting a dead client, terminate the loop.
So if the slave thread stops on the slave, the corresponding Binlog_Dump thread on the master will not notice it until after at least one update to the master (or a kill), which is needed to wake it up from pthread_cond_wait(). In the meantime, the slave could have opened another connection, which resulted in another Binlog_Dump thread.

The above problem should not be present in Version 3.23.26 and later versions. In Version 3.23.26 we added server-id to each replication server, and now all the old zombie threads are killed on the master when a new replication thread connects from the same slave.

How do I rotate replication logs?
In Version 3.23.28 you should use the PURGE MASTER LOGS TO command after determining which logs can be deleted, optionally backing them up first. In earlier versions the process is much more painful, and cannot be safely done without stopping all the slaves if you plan to re-use log names. You will need to stop the slave threads, edit the binary log index file, delete all the old logs, restart the master, start the slave threads, and then remove the old log files.
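A hedged sketch of that sequence: run SHOW SLAVE STATUS on every slave to see which log each one is reading, then on the master (the log name below is only an example):
mysql> SHOW MASTER LOGS;
mysql> PURGE MASTER LOGS TO 'mysql-bin.007';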

How do I upgrade on a hot replication setup?
If you are upgrading pre-3.23.26 versions, you should just lock the master tables, let the slave catch up, run FLUSH MASTER on the master and FLUSH SLAVE on the slave to reset the logs, and then restart new versions of the master and the slave. Note that the slave can stay down for some time - since the master is logging all the updates, the slave will be able to catch up once it is up and can connect.
After 3.23.26, we have locked the replication protocol against modifications, so you can upgrade masters and slaves on the fly to a newer 3.23 version, and you can have different versions of MySQL running on the slave and the master, as long as they are both newer than 3.23.26.

What issues should I be aware of when setting up two-way replication?
MySQL replication currently does not support any locking protocol between master and slave to guarantee the atomicity of a distributed (cross-server) update. In other words, it is possible for client A to make an update to co-master 1, and in the meantime, before it propagates to co-master 2, client B could make an update to co-master 2 that makes the update of client A work differently than it did on co-master 1. Thus, when the update of client A makes it to co-master 2, it will produce tables that are different from what you have on co-master 1, even after all the updates from co-master 2 have also propagated. So you should not chain two servers in a two-way replication relationship unless you are sure that your updates can safely happen in any order, or unless you take care of mis-ordered updates somehow in the client code.
You must also realize that two-way replication actually does not improve performance very much, if at all, as far as updates are concerned. Both servers need to do the same amount of updates each, as you would have one server do. The only difference is that there will be a little less lock contention, because the updates originating on another server will be serialized in one slave thread. This benefit, though, might be offset by network delays.

How can I use replication to improve performance of my system?
You should set up one server as the master, and direct all writes to it, and configure as many slaves as you have the money and rackspace for, distributing the reads among the master and the slaves. You can also start the slaves with --skip-bdb, --low-priority-updates and --delay-key-write-for-all-tables to get speed improvements for the slave. In this case the slave will use non-transactional MyISAM tables instead of BDB tables to get more speed.
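For example, a slave could be started with those options like this (a sketch only; whether they help depends on your workload):
shell> safe_mysqld --skip-bdb --low-priority-updates --delay-key-write-for-all-tables &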

What should I do to prepare my client code to use performance-enhancing replication?
If the part of your code that is responsible for database access has been properly abstracted/modularized, converting it to run with a replicated setup should be very smooth and easy - just change the implementation of your database access to read from some slave or the master, and to always write to the master. If your code does not have this level of abstraction, setting up a replicated system will give you an opportunity and motivation to clean it up. You should start by creating a wrapper library/module with the following functions:

safe_writer_connect()
safe_reader_connect()
safe_reader_query()
safe_writer_query()
safe_ means that the function will take care of handling all the error conditions.

You should then convert your client code to use the wrapper library. It may be a painful and scary process at first, but it will pay off in the long run. All applications that follow the above pattern will be able to take advantage of one-master/many slaves solution. The code will be a lot easier to maintain, and adding troubleshooting options will be trivial. You will just need to modify one or two functions, for example, to log how long each query took, or which query, among your many thousands, gave you an error. If you have written a lot of code already, you may want to automate the conversion task by using Monty's replace utility, which comes with the standard distribution of MySQL, or just write your own Perl script. Hopefully, your code follows some recognizable pattern. If not, then you are probably better off re-writing it anyway, or at least going through and manually beating it into a pattern.

Note that, of course, you can use different names for the functions. What is important is having a unified interface for connecting for reads, connecting for writes, doing a read, and doing a write.

When and how much can MySQL replication improve the performance of my system?
MySQL replication is most beneficial for a system with frequent reads and not so frequent writes. In theory, by using a one master/many slaves setup you can scale by adding more slaves until you either run out of network bandwidth, or your update load grows to the point that the master cannot handle it.

In order to determine how many slaves you can add before the benefits begin to level off, and how much you can improve the performance of your site, you need to know your query patterns and empirically (by benchmarking) determine the relationship between the throughput on reads (reads per second, or max_reads) and on writes (max_writes) on a typical master and a typical slave. The example below shows a rather simplified calculation of what you can get with replication for an imagined system.

Let's say our system load consists of 10% writes and 90% reads, and we have determined that max_reads = 1200 - 2 * max_writes, or in other words, our system can do 1200 reads per second with no writes, our average write is twice as slow as average read, and the relationship is linear. Let us suppose that our master and slave are of the same capacity, and we have N slaves and 1 master. Then we have for each server (master or slave):

reads = 1200 - 2 * writes (from benchmarks)

reads = 9 * writes / (N + 1) (reads are split, but writes go to all servers)

9 * writes / (N + 1) + 2 * writes = 1200

writes = 1200 / (2 + 9 / (N + 1))

So if N = 0, which means we have no replication, our system can handle 1200/11, about 109 writes per second (which means we will have 9 times as many reads due to the nature of our application).

If N = 1, we can get up to 184 writes per second.

If N = 8, we get up to 400.

If N = 17, 480 writes.

Eventually as N approaches infinity (and our budget negative infinity), we can get very close to 600 writes per second, increasing system throughput about 5.5 times. However, with only 8 servers, we increased it almost 4 times already.

Note that our computations assumed infinite network bandwidth and neglected several other factors that could turn out to be significant on your system. In many cases, you may not be able to make a computation similar to the one above that will accurately predict what will happen on your system if you add N replication slaves. However, answering the following questions should help you decide whether, and by how much, replication will improve the performance of your system:
What is the read/write ratio on your system?
How much more write load can one server handle if you reduce the reads?
How many slaves do you have bandwidth for on your network?

How can I use replication to provide redundancy/high availability?
With the currently available features, you would have to set up a master and a slave (or several slaves), and write a script that will monitor the master to see if it is up and notify your applications and the slaves of the master change in case of failure. Some suggestions:
To tell a slave to change the master use the CHANGE MASTER TO command.
A good way to keep your applications informed about where the master is is to have a dynamic DNS entry for the master. With bind you can use nsupdate to update your DNS dynamically.
You should run your slaves with the log-bin option and without log-slave-updates. This way the slave will be ready to become a master as soon as you issue STOP SLAVE; RESET MASTER, and CHANGE MASTER TO on the other slaves. It will also help you catch spurious updates that may happen because of misconfiguration of the slave (ideally, you want to configure access rights so that no client can update the slave, except for the slave thread) combined with the bugs in your client programs (they should never update the slave directly).
We are currently working on integrating an automatic master election system into MySQL, but until it is ready, you will have to create your own monitoring tools.
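A rough sketch of a manual failover, assuming the slave being promoted is reachable as new-master.mycompany.com (a placeholder):

On the slave being promoted:
mysql> SLAVE STOP;
mysql> RESET MASTER;

On each remaining slave:
mysql> SLAVE STOP;
mysql> CHANGE MASTER TO MASTER_HOST='new-master.mycompany.com';
mysql> SLAVE START;

Your applications would then be pointed at the new master, for example by updating the dynamic DNS entry mentioned above.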

MySQL - Troubleshooting Replication
If you have followed the instructions and your replication setup is not working, first eliminate the user error factor by checking the following:

Is the master logging to the binary log? Check with SHOW MASTER STATUS. If it is, Position will be non-zero. If not, verify that you have given the master log-bin option and have set server-id.
Is the slave running? Check with SHOW SLAVE STATUS. The answer is found in the Slave_running column. If not, verify the slave options and check the error log for messages.
If the slave is running, did it establish connection with the master? Do SHOW PROCESSLIST, find the thread with system user value in User column and none in the Host column, and check the State column. If it says connecting to master, verify the privileges for the replication user on the master, master host name, your DNS setup, whether the master is actually running, whether it is reachable from the slave, and if all that seems ok, read the error logs.
If the slave was running, but then stopped, look at the SHOW SLAVE STATUS output and check the error logs. It usually happens when some query that succeeded on the master fails on the slave. This should never happen if you have taken a proper snapshot of the master and never modify the data on the slave outside of the slave thread. If it does, it is a bug; read below on how to report it.
If a query that succeeded on the master refuses to run on the slave, and a full database resync (the proper thing to do) does not seem feasible, try the following:
First see if there is some stray record in the way. Understand how it got there, then delete it and run SLAVE START. If the above does not work or does not apply, try to understand whether it would be safe to make the update manually (if needed) and then ignore the next query from the master.
If you have decided you can skip the next query, do SET SQL_SLAVE_SKIP_COUNTER=1; SLAVE START; to skip a query that does not use AUTO_INCREMENT, LAST_INSERT_ID, or timestamp values, or SET SQL_SLAVE_SKIP_COUNTER=2; SLAVE START; otherwise. If you are sure the slave started out perfectly in sync with the master, and no one has updated the tables involved outside of the slave thread, report the bug, so you will not have to do the above tricks again.
Make sure you are not running into an old bug by upgrading to the most recent version.
If all else fails, read the error logs. If they are big, grep -i slave /path/to/your-log.err on the slave. There is no generic pattern to search for on the master, as the only errors it logs are general system errors - if it can, it will send the error to the slave when things go wrong.
When you have determined that there is no user error involved, and replication still either does not work at all or is unstable, it is time to start working on a bug report. We need to get as much info as possible from you to be able to track down the bug. Please do spend some time and effort preparing a good bug report. Ideally, we would like to have a test case in the format found in the mysql-test/t/rpl* directory of the source tree. If you submit a test case like that, you can expect a patch within a day or two in most cases, although, of course, your mileage may vary depending on a number of factors.

The second best option is just a program with easily configurable connection arguments for the master and the slave that will demonstrate the problem on our systems. You can write one in Perl or in C, depending on which language you know better.

If you have one of the above ways to demonstrate the bug, use mysqlbug to prepare a bug report and send it to bugs@lists.mysql.com. If you have a phantom - a problem that does occur but you cannot duplicate "at will":

Verify that there is no user error involved. For example, if you update the slave outside of the slave thread, the data will be out of sync, and you can have unique key violations on updates, in which case the slave thread will stop and wait for you to clean up the tables manually to bring them in sync.
Run slave with log-slave-updates and log-bin - this will keep a log of all updates on the slave.
Save all evidence before resetting the replication. If we have no or only sketchy information, it will take us a while to track down the problem. The evidence you should collect is:
All binary logs on the master
All binary logs on the slave
The output of SHOW MASTER STATUS on the master at the time you discovered the problem
The output of SHOW SLAVE STATUS on the slave at the time you discovered the problem
Error logs on the master and on the slave
Use mysqlbinlog to examine the binary logs. The following should be helpful to find the trouble query, for example:
mysqlbinlog -j pos_from_slave_status /path/to/log_from_slave_status | head

Once you have collected the evidence on the phantom problem, try hard to isolate it into a separate test case first. Then report the problem to bugs@lists.mysql.com with as much info as possible.


Getting Maximum Performance from MySQL


Optimization is a complicated task because it ultimately requires understanding of the whole system. While it may be possible to do some local optimizations with little knowledge of your system or application, the more optimal you want your system to become, the more you will have to know about it.
So this chapter will try to explain and give some examples of different ways to optimize MySQL. But remember that there are always some (increasingly harder) additional ways to make the system even faster.

MySQL - Optimization Overview
The most important part for getting a system fast is of course the basic design. You also need to know what kinds of things your system will be doing, and what your bottlenecks are.

The most common bottlenecks are:

Disk seeks. It takes time for the disk to find a piece of data. With modern disks in 1999, the mean time for this is usually lower than 10ms, so we can in theory do about 1000 seeks a second. This time improves slowly with new disks and is very hard to optimize for a single table. The way to optimize this is to spread the data on more than one disk.
Disk reading/writing. When the disk is at the correct position we need to read the data. With modern disks in 1999, one disk delivers something like 10-20Mb/s. This is easier to optimize than seeks because you can read in parallel from multiple disks.
CPU cycles. When we have the data in main memory (or if it already were there) we need to process it to get to our result. Having small tables compared to the memory is the most common limiting factor. But then, with small tables speed is usually not the problem.
Memory bandwidth. When the CPU needs more data than can fit in the CPU cache, the main memory bandwidth becomes a bottleneck. This is an uncommon bottleneck for most systems, but one should be aware of it.

MySQL - System/Compile Time and Startup Parameter Tuning
We start with system-level things, since some of these decisions have to be made very early. In other cases a quick look at this part may suffice, because it is not that important for the big gains. However, it is always nice to have a feeling for how much one could gain by changing things at this level.

The OS you use is really important! To get the most use of multiple-CPU machines, one should use Solaris (because its threads work really well) or Linux (because the 2.2 kernel has really good SMP support). Also, on 32-bit machines Linux has a 2G file size limit by default. Hopefully this will be fixed soon when new file systems are released (XFS/Reiserfs). If you have a desperate need for files bigger than 2G on Linux-Intel 32-bit, you should get the LFS patch for the ext2 file system.

Because we have not run MySQL in production on that many platforms, we advise you to test your intended platform before choosing it, if possible.

Other tips:

If you have enough RAM, you could remove all swap devices. Some operating systems will use a swap device in some contexts even if you have free memory.
Use the --skip-locking MySQL option to avoid external locking. Note that this will not impact MySQL's functionality as long as you only run one server. Just remember to take down the server (or lock relevant parts) before you run myisamchk. On some systems this switch is mandatory, because the external locking does not work in any case. The --skip-locking option is on by default when compiling with MIT-pthreads, because flock() isn't fully supported by MIT-pthreads on all platforms. It's also on by default for Linux, as Linux file locking is not yet safe. The only case when you can't use --skip-locking is if you run multiple MySQL servers (not clients) on the same data, or if you run myisamchk on a table without first flushing and locking the mysqld server tables. You can still use LOCK TABLES/UNLOCK TABLES even if you are using --skip-locking.

MySQL - How Compiling and Linking Affects the Speed of MySQL
Most of the following tests are done on Linux with the MySQL benchmarks, but they should give some indication for other operating systems and workloads.

You get the fastest executable when you link with -static.

On Linux, you will get the fastest code when compiling with pgcc and -O6. To compile `sql_yacc.cc' with these options, you need about 200M of memory because gcc/pgcc needs a lot of memory to make all functions inline. You should also set CXX=gcc when configuring MySQL to avoid inclusion of the libstdc++ library (it is not needed). Note that with some versions of pgcc, the resulting code will only run on true Pentium processors, even if you use the compiler option that asks for the resulting code to work on all x586-type processors (such as AMD).

By just using a better compiler and/or better compiler options you can get a 10-30 % speed increase in your application. This is particularly important if you compile the SQL server yourself!

We have tested both the Cygnus CodeFusion and Fujitsu compilers, but when we tested them, neither was sufficiently bug free to allow MySQL to be compiled with optimizations on.

When you compile MySQL you should only include support for the character sets that you are going to use. (Option --with-charset=xxx). The standard MySQL binary distributions are compiled with support for all character sets.

Here is a list of some measurements that we have done:

If you use pgcc and compile everything with -O6, the mysqld server is 1% faster than with gcc 2.95.2. If you link dynamically (without -static), the result is 13% slower on Linux. Note that you can still use a dynamically linked MySQL library. It is only the server that is critical for performance.
If you connect using TCP/IP rather than Unix sockets, the result is 7.5% slower on the same computer. (If you are connecting to localhost, MySQL will, by default, use sockets.)
If you compile with --with-debug=full, you will lose 20% for most queries, but some queries may take substantially longer (the MySQL benchmarks ran 35% slower). If you use --with-debug, you will only lose 15%. On a Sun SPARCstation 20, SunPro C++ 4.2 is 5% faster than gcc 2.95.2.
Compiling with gcc 2.95.2 for ultrasparc with the option -mcpu=v8 -Wa,-xarch=v8plusa gives 4 % more performance. On Solaris 2.5.1, MIT-pthreads is 8-12% slower than Solaris native threads on a single processor. With more load/CPUs the difference should get bigger.
Running with --log-bin makes [MySQL 1 % slower.
Compiling without frame pointers -fomit-frame-pointer with gcc makes MySQL 1 % faster.
The MySQL-Linux distribution provided by MySQL AB used to be compiled with pgcc, but we had to go back to regular gcc because of a bug in pgcc that would generate the code that does not run on AMD. We will continue using gcc until that bug is resolved. In the meantime, if you have a non-AMD machine, you can get a faster binary by compiling with pgcc. The standard MySqL Linux binary is linked statically to get it faster and more portable.

MySQL - Disk Issues
As mentioned before, disk seeks are a big performance bottleneck. This problem gets more and more apparent when the data starts to grow so large that effective caching becomes impossible. For large databases, where you access data more or less randomly, you can be sure that you will need at least one disk seek to read and a couple of disk seeks to write things. To minimize this problem, use disks with low seek times.
Increase the number of available disk spindles (and thereby reduce the seek overhead) by either symlinking files to different disks or striping the disks.
Using symbolic links
This means that you symlink the index and/or data file(s) from the normal data directory to another disk (that may also be striped). This makes both the seek and read times better (if the disks are not used for other things).
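A hedged sketch of symlinking a single table's data file to another disk (the database name, table name, and paths are placeholders; take the server down, or flush and lock the table, before moving files):

shell> cd /usr/local/mysql/data/db_name
shell> mv tbl_name.MYD /disk2/mysql-data/tbl_name.MYD
shell> ln -s /disk2/mysql-data/tbl_name.MYD tbl_name.MYD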

Striping
Striping means that you have many disks and put the first block on the first disk, the second block on the second disk, and the Nth on the (N mod number_of_disks) disk, and so on. This means if your normal data size is less than the stripe size (or perfectly aligned) you will get much better performance. Note that striping is very dependent on the OS and stripe-size. So benchmark your application with different stripe-sizes.

Note that the speed difference for striping is very dependent on the parameters. Depending on how you set the striping parameters and the number of disks, you may get differences measured in orders of magnitude. Note that you have to choose to optimize for random or sequential access.
For reliability you may want to use RAID 0+1 (striping + mirroring), but in this case you will need 2*N drives to hold N drives of data. This is probably the best option if you have the money for it! You may, however, also have to invest in some volume-management software to handle it efficiently.
A good option is to have semi-important data (that can be regenerated) on RAID 0 disk while storing really important data (like host information and logs) on a RAID 0+1 or RAID N disk. RAID N can be a problem if you have many writes because of the time to update the parity bits.
You may also set the parameters for the file system that the database uses. One easy change is to mount the file system with the noatime option. That makes it skip the updating of the last access time in the inode and by this will avoid some disk seeks.
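As a hedged example, assuming the data directory lives on its own file system (the device and mount point below are placeholders):

shell> mount -o remount,noatime /var/lib/mysql

or, as a line in /etc/fstab:

/dev/sda3  /var/lib/mysql  ext2  defaults,noatime  1 2
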
On Linux, you can get much more performance (up to 100% under load is not uncommon) by using hdparm to configure your disk's interface! The following should be quite good hdparm options for MySQL (and probably many other applications): hdparm -m 16 -d 1

Note that the performance/reliability when using the above depends on your hardware, so we strongly suggest that you test your system thoroughly after using hdparm! Please consult the hdparm man page for more information! If hdparm is not used wisely, filesystem corruption may result. Back up everything before experimenting!
On many operating systems you can mount the disks with the 'async' flag to set the file system to be updated asynchronously. If your computer is reasonably stable, this should give you more performance without sacrificing too much reliability. (This flag is on by default on Linux.)
If you don't need to know when a file was last accessed (which is not really useful on a database server), you can mount your file systems with the noatime flag.

MySQL - Using Symbolic Links for Databases and Tables
You can move tables and databases from the database directory to other locations and replace them with symbolic links to the new locations. You might want to do this, for example, to move a database to a file system with more free space.

If MySQL notices that a table is symbolically linked, it will resolve the symlink and use the table it points to instead. This works on all systems that support the realpath() call (at least Linux and Solaris support realpath())! On systems that don't support realpath(), you should not access the table through the real path and through the symlink at the same time! If you do, the table will be inconsistent after any update.

MySQL doesn't support linking one directory to multiple databases. Replacing a database directory with a symbolic link will work fine as long as you don't make a symbolic link between databases. Suppose you have a database db1 under the MySQL data directory, and then make a symlink db2 that points to db1:

shell> cd /path/to/datadir
shell> ln -s db1 db2

Now, for any table tbl_a in db1, there also appears to be a table tbl_a in db2. If one thread updates db1.tbl_a and another thread updates db2.tbl_a, there will be problems.

If you really need this, you must change the following code in `mysys/mf_format.c':

if (flag & 32 || (!lstat(to,&stat_buff) && S_ISLNK(stat_buff.st_mode)))

to

if (1)

On Windows you can use internal symbolic links to directories by compiling MySQL with -DUSE_SYMDIR. This allows you to put different databases on different disks.

MySQL - Tuning Server Parameters
You can get the default buffer sizes used by the mysqld server with this command:

shell> mysqld --help

This command produces a list of all mysqld options and configurable variables. The output includes the default values and looks something like this:

Possible variables for option --set-variable (-O) are:
back_log current value: 5
bdb_cache_size current value: 1048540
binlog_cache_size current value: 32768
connect_timeout current value: 5
delayed_insert_timeout current value: 300
delayed_insert_limit current value: 100
delayed_queue_size current value: 1000
flush_time current value: 0
interactive_timeout current value: 28800
join_buffer_size current value: 131072
key_buffer_size current value: 1048540
lower_case_table_names current value: 0
long_query_time current value: 10
max_allowed_packet current value: 1048576
max_binlog_cache_size current value: 4294967295
max_connections current value: 100
max_connect_errors current value: 10
max_delayed_threads current value: 20
max_heap_table_size current value: 16777216
max_join_size current value: 4294967295
max_sort_length current value: 1024
max_tmp_tables current value: 32
max_write_lock_count current value: 4294967295
myisam_sort_buffer_size current value: 8388608
net_buffer_length current value: 16384
net_retry_count current value: 10
net_read_timeout current value: 30
net_write_timeout current value: 60
query_buffer_size current value: 0
record_buffer current value: 131072
slow_launch_time current value: 2
sort_buffer current value: 2097116
table_cache current value: 64
thread_concurrency current value: 10
tmp_table_size current value: 1048576
thread_stack current value: 131072
wait_timeout current value: 28800

If there is a mysqld server currently running, you can see what values it actually is using for the variables by executing this command:

shell> mysqladmin variables

You can find a full description for all variables in the SHOW VARIABLES section in this manual.

You can also see some statistics from a running server by issuing the command SHOW STATUS.

MySQL uses algorithms that are very scalable, so you can usually run with very little memory. If you, however, give MySQL more memory, you will normally also get better performance.

When tuning a MySQL server, the two most important variables to use are key_buffer_size and table_cache. You should first feel confident that you have these right before trying to change any of the other variables.

If you have much memory (>=256M) and many tables and want maximum performance with a moderate number of clients, you should use something like this:

shell> safe_mysqld -O key_buffer=64M -O table_cache=256 \
-O sort_buffer=4M -O record_buffer=1M &

If you have only 128M and only a few tables, but you still do a lot of sorting, you can use something like:

shell> safe_mysqld -O key_buffer=16M -O sort_buffer=1M

If you have little memory and lots of connections, use something like this:

shell> safe_mysqld -O key_buffer=512k -O sort_buffer=100k \
-O record_buffer=100k &

or even:

shell> safe_mysqld -O key_buffer=512k -O sort_buffer=16k \
-O table_cache=32 -O record_buffer=8k -O net_buffer=1K &

When you have installed MySQL, the `support-files' directory will contain some different my.cnf example files, `my-huge.cnf', `my-large.cnf', `my-medium.cnf', and `my-small.cnf', that you can use as a base for optimizing your system.
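As a hedged illustration, the same kind of settings can also be put in a my.cnf file using the set-variable syntax that these example files use (the values below simply mirror the large-memory command line shown earlier):

[mysqld]
set-variable = key_buffer=64M
set-variable = table_cache=256
set-variable = sort_buffer=4M
set-variable = record_buffer=1M
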
If there are very many connections, ``swapping problems'' may occur unless mysqld has been configured to use very little memory for each connection. mysqld performs better if you have enough memory for all connections, of course.
Note that if you change an option to mysqld, it remains in effect only for that instance of the server.
To see the effects of a parameter change, do something like this:

shell> mysqld -O key_buffer=32m --help

Make sure that the --help option is last; otherwise, the effect of any options listed after it on the command line will not be reflected in the output.

How MySQL Opens and Closes Tables ?
table_cache, max_connections, and max_tmp_tables affect the maximum number of files the server keeps open. If you increase any of these values, you may run up against a limit imposed by your operating system on the per-process number of open file descriptors. However, you can increase the limit on many systems. Consult your OS documentation to find out how to do this, because the method for changing the limit varies widely from system to system.

table_cache is related to max_connections. For example, for 200 concurrent running connections, you should have a table cache of at least 200 * n, where n is the maximum number of tables in a join.
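As a purely illustrative calculation: with 200 concurrent connections and joins that touch at most 4 tables each, you would want a table cache of at least 200 * 4 = 800 (and enough file descriptors to match).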

The cache of open tables can grow to a maximum of table_cache (default 64; this can be changed with the -O table_cache=# option to mysqld). A table is never closed, except when the cache is full and another thread tries to open a table or if you use mysqladmin refresh or mysqladmin flush-tables.

When the table cache fills up, the server uses the following procedure to locate a cache entry to use:

Tables that are not currently in use are released, in least-recently-used order.
If the cache is full and no tables can be released, but a new table needs to be opened, the cache is temporarily extended as necessary.
If the cache is in a temporarily-extended state and a table goes from in-use to not-in-use state, the table is closed and released from the cache.
A table is opened for each concurrent access. This means that if you have two threads accessing the same table or access the table twice in the same query (with AS) the table needs to be opened twice. The first open of any table takes two file descriptors; each additional use of the table takes only one file descriptor. The extra descriptor for the first open is used for the index file; this descriptor is shared among all threads.

You can check if your table cache is too small by checking the mysqld variable opened_tables. If this is quite big, even if you haven't done a lot of FLUSH TABLES, you should increase your table cache.
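A hedged example of that check (the LIKE pattern assumes your server version supports it; otherwise plain SHOW STATUS works too):

mysql> SHOW STATUS LIKE 'Opened_tables';

If this value keeps growing during normal operation even though you rarely issue FLUSH TABLES, increase table_cache.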

MySQL - Drawbacks to Creating Large Numbers of Tables in the Same Database
If you have many files in a directory, open, close, and create operations will be slow. If you execute SELECT statements on many different tables, there will be a little overhead when the table cache is full, because for every table that has to be opened, another must be closed. You can reduce this overhead by making the table cache larger.

MySQL - Why So Many Open tables?
When you run mysqladmin status, you'll see something like this:

Uptime: 426 Running threads: 1 Questions: 11082 Reloads: 1 Open tables: 12

This can be somewhat perplexing if you only have 6 tables.

MySQL is multithreaded, so it may have many queries on the same table simultaneously. To minimize the problem with two threads having different states on the same file, the table is opened independently by each concurrent thread. This takes some memory and one extra file descriptor for the data file. The index file descriptor is shared between all threads.

How MySQL Uses Memory ?
The list below indicates some of the ways that the mysqld server uses memory. Where applicable, the name of the server variable relevant to the memory use is given:

The key buffer (variable key_buffer_size) is shared by all threads; other buffers used by the server are allocated as needed.

Each connection uses some thread-specific space: A stack (default 64K, variable thread_stack), a connection buffer (variable net_buffer_length), and a result buffer (variable net_buffer_length). The connection buffer and result buffer are dynamically enlarged up to max_allowed_packet when needed. When a query is running, a copy of the current query string is also allocated.
All threads share the same base memory.
Only the compressed ISAM / MyISAM tables are memory mapped. This is because the 32-bit memory space of 4GB is not large enough for most big tables. When systems with a 64-bit address space become more common we may add general support for memory mapping.
Each request doing a sequential scan over a table allocates a read buffer (variable record_buffer).
All joins are done in one pass, and most joins can be done without even using a temporary table. Most temporary tables are memory-based (HEAP) tables. Temporary tables with a big record length (calculated as the sum of all column lengths) or that contain BLOB columns are stored on disk. One problem in MySQL versions before Version 3.23.2 is that if a HEAP table exceeds the size of tmp_table_size, you get the error The table tbl_name is full. In newer versions this is handled by automatically changing the in-memory (HEAP) table to a disk-based (MyISAM) table as necessary. To work around this problem, you can increase the temporary table size by setting the tmp_table_size option to mysqld, or by setting the SQL option SQL_BIG_TABLES in the client program.

In MySQL Version 3.20, the maximum size of the temporary table was record_buffer*16, so if you are using this version, you have to increase the value of record_buffer. You can also start mysqld with the --big-tables option to always store temporary tables on disk. However, this will affect the speed of many complicated queries.
Most requests doing a sort allocate a sort buffer and 0-2 temporary files depending on the result set size.

Almost all parsing and calculating is done in a local memory store. No memory overhead is needed for small items and the normal slow memory allocation and freeing is avoided. Memory is allocated only for unexpectedly large strings (this is done with malloc() and free()).
Each index file is opened once and the data file is opened once for each concurrently running thread. For each concurrent thread, a table structure, column structures for each column, and a buffer of size 3 * n is allocated (where n is the maximum row length, not counting BLOB columns). A BLOB uses 5 to 8 bytes plus the length of the BLOB data. The ISAM/MyISAM table handlers will use one extra row buffer for internal usage.
For each table having BLOB columns, a buffer is enlarged dynamically to read in larger BLOB values. If you scan a table, a buffer as large as the largest BLOB value is allocated.
Table handlers for all in-use tables are saved in a cache and managed as a FIFO. Normally the cache has 64 entries. If a table has been used by two running threads at the same time, the cache contains two entries for the table.

A mysqladmin flush-tables command closes all tables that are not in use and marks all in-use tables to be closed when the currently executing thread finishes. This will effectively free most in-use memory. ps and other system status programs may report that mysqld uses a lot of memory. This may be caused by thread-stacks on different memory addresses. For example, the Solaris version of ps counts the unused memory between stacks as used memory. You can verify this by checking available swap with swap -s. We have tested mysqld with commercial memory-leakage detectors, so there should be no memory leaks.

How MySQL Locks Tables ?
You can find a discussion about different locking methods in the appendix.

All locking in MySQL is deadlock-free. This is managed by always requesting all needed locks at once at the beginning of a query and always locking the tables in the same order.

The locking method MySQL uses for WRITE locks works as follows:

If there are no locks on the table, put a write lock on it. Otherwise, put the lock request in the write lock queue. The locking method MySQL uses for READ locks works as follows:

If there are no write locks on the table, put a read lock on it. Otherwise, put the lock request in the read lock queue. When a lock is released, the lock is made available to the threads in the write lock queue, then to the threads in the read lock queue.

This means that if you have many updates on a table, SELECT statements will wait until there are no more updates.

To work around this for the case where you want to do many INSERT and SELECT operations on a table, you can insert rows in a temporary table and update the real table with the records from the temporary table once in a while.

This can be done with the following code:

mysql> LOCK TABLES real_table WRITE, insert_table WRITE;
mysql> insert into real_table select * from insert_table;
mysql> TRUNCATE TABLE insert_table;
mysql> UNLOCK TABLES;

You can use the LOW_PRIORITY option with INSERT if you want to prioritize retrieval in some specific cases.

You could also change the locking code in `mysys/thr_lock.c' to use a single queue. In this case, write locks and read locks would have the same priority, which might help some applications.

MySQL - Table Locking Issues
The table locking code in MySQL is deadlock free.

MySQL uses table locking (instead of row locking or column locking) on all table types, except BDB tables, to achieve a very high lock speed. For large tables, table locking is MUCH better than row locking for most applications, but there are, of course, some pitfalls.

For BDB tables, MySQL only uses table locking if you explicitly lock the table with LOCK TABLES or execute a command that will modify every row in the table, like ALTER TABLE.

In MySQL Version 3.23.7 and above, you can insert rows into MyISAM tables at the same time other threads are reading from the table. Note that currently this only works if there are no holes after deleted rows in the table at the time the insert is made.

Table locking enables many threads to read from a table at the same time, but if a thread wants to write to a table, it must first get exclusive access. During the update, all other threads that want to access this particular table will wait until the update is ready.

As updates on tables normally are considered to be more important than SELECT, all statements that update a table have higher priority than statements that retrieve information from a table. This should ensure that updates are not 'starved' because one issues a lot of heavy queries against a specific table. (You can change this by using LOW_PRIORITY with the statement that does the update or HIGH_PRIORITY with the SELECT statement.)

Starting from MySQL Version 3.23.7, you can use the max_write_lock_count variable to force MySQL to temporarily give all SELECT statements that are waiting for a table a higher priority after a specific number of inserts on a table.

Table locking is, however, not very good under the following scenario:

A client issues a SELECT that takes a long time to run.
Another client then issues an UPDATE on a used table. This client will wait until the SELECT is finished.
Another client issues another SELECT statement on the same table. As UPDATE has higher priority than SELECT, this SELECT will wait for the UPDATE to finish. It will also wait for the first SELECT to finish!
A thread is waiting for something like a full disk, in which case all threads that want to access the problem table will also be put in a waiting state until more disk space is made available.
Some possible solutions to this problem are:

Try to get the SELECT statements to run faster. You may have to create some summary tables to do this.
Start mysqld with --low-priority-updates. This will give all statements that update (modify) a table lower priority than a SELECT statement. In this case the last SELECT statement in the previous scenario would execute before the UPDATE statement. You can give a specific INSERT, UPDATE, or DELETE statement lower priority with the LOW_PRIORITY attribute.
Start mysqld with a low value for max_write_lock_count to give READ locks after a certain number of WRITE locks.
You can specify that all updates from a specific thread should be done with low priority by using the SQL command: SET SQL_LOW_PRIORITY_UPDATES=1.

You can specify that a specific SELECT is very important with the HIGH_PRIORITY attribute.

If you have problems with INSERT combined with SELECT, switch to use the new MyISAM tables as these support concurrent SELECTs and INSERTs.
If you mainly mix INSERT and SELECT statements, the DELAYED attribute to INSERT will probably solve your problems.

If you have problems with SELECT and DELETE, the LIMIT option to DELETE may help.

How MySQL uses DNS ?
When a new client connects to mysqld, mysqld will spawn a new thread to handle the request. This thread will first check whether the hostname is in the hostname cache. If not, the thread will call gethostbyaddr_r() and gethostbyname_r() to resolve the hostname.

If the operating system doesn't support the above thread-safe calls, the thread will lock a mutex and call gethostbyaddr() and gethostbyname() instead. Note that in this case no other thread can resolve hostnames that are not in the hostname cache until the first thread is finished.

You can disable DNS host lookup by starting mysqld with --skip-name-resolve. In this case, however, you can only use IP addresses in the MySQL privilege tables.

If you have a very slow DNS and many hosts, you can get more performance by either disabling DNS lookup with --skip-name-resolve or by increasing the HOST_CACHE_SIZE define (default: 128) and recompiling mysqld.

You can disable the hostname cache with --skip-host-cache. You can clear the hostname cache with FLUSH HOSTS or mysqladmin flush-hosts.

If you don't want to allow connections over TCP/IP, you can do this by starting mysqld with --skip-networking.
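A hedged illustration of these startup options (safe_mysqld is the wrapper script shipped with these MySQL versions; use one or both options depending on what you want to disable):

shell> safe_mysqld --skip-name-resolve &
shell> safe_mysqld --skip-networking &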

MySQL - Get Your Data as Small as Possible
One of the most basic optimizations is to get your data (and indexes) to take as little space on the disk (and in memory) as possible. This can give huge improvements because disk reads are faster and normally less main memory will be used. Indexing also takes less resources if done on smaller columns.

MySQL supports a lot of different table types and row formats. Choosing the right table format may give you a big performance gain.

You can get better performance on a table and minimize storage space using the techniques listed below:

Use the most efficient (smallest) types possible. MySQL has many specialized types that save disk space and memory.
Use the smaller integer types if possible to get smaller tables. For example, MEDIUMINT is often better than INT.
Declare columns to be NOT NULL if possible. It makes everything faster and you save one bit per column. Note that if you really need NULL in your application you should definitely use it. Just avoid having it on all columns by default.
If you don't have any variable-length columns (VARCHAR, TEXT, or BLOB columns), a fixed-size record format is used. This is faster but unfortunately may waste some space.

The primary index of a table should be as short as possible. This makes identification of one row easy and efficient. For each table, you have to decide which storage/index method to use.

Only create the indexes that you really need. Indexes are good for retrieval but bad when you need to store things fast. If you mostly access a table by searching on a combination of columns, make an index on them. The first index part should be the most used column. If you are ALWAYS using many columns, you should use the column with more duplicates first to get better compression of the index.
If it's very likely that a column has a unique prefix on the first few characters, it's better to only index this prefix. MySQL supports an index on a part of a character column. Shorter indexes are faster not only because they take less disk space but also because they will give you more hits in the index cache and thus fewer disk seeks.
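As a hedged illustration (the customer table, the name column, and the 10-character prefix are only placeholders), an index on a column prefix is created like this:

mysql> CREATE INDEX part_of_name ON customer (name(10));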

In some circumstances it can be beneficial to split a table that is scanned very often into two. This is especially true if it is a dynamic-format table and it is possible to use a smaller static-format table that can be used to find the relevant rows when scanning the table.

How MySQL Uses Indexes ?
Indexes are used to find rows with a specific value in a column quickly. Without an index MySQL has to start with the first record and then read through the whole table until it finds the relevant rows. The bigger the table, the more this costs. If the table has an index for the columns in question, MySQL can quickly get a position to seek to in the middle of the data file without having to look at all the data. If a table has 1000 rows, this is at least 100 times faster than reading sequentially. Note that if you need to access almost all 1000 rows it is faster to read sequentially, because then we avoid disk seeks.

All MySQL indexes (PRIMARY, UNIQUE, and INDEX) are stored in B-trees. Strings are automatically prefix- and end-space compressed.

Indexes are used to:

Quickly find the rows that match a WHERE clause.
Retrieve rows from other tables when performing joins.
Find the MAX() or MIN() value for a specific indexed column. This is optimized by a preprocessor that checks whether you are using WHERE key_part_# = constant on all key parts that occur before the column in question; in that case MySQL can do a single key lookup and replace the MIN()/MAX() expression with a constant.
Sort or group a table when the sorting or grouping is done on a leftmost prefix of a usable key.
In some cases, retrieve values directly from the index tree without consulting the data file.

Suppose you issue the following SELECT statement:

mysql> SELECT * FROM tbl_name WHERE col1=val1 AND col2=val2;

If a multiple-column index exists on col1 and col2, the appropriate rows can be fetched directly. If separate single-column indexes exist on col1 and col2, the optimizer tries to find the most restrictive index by deciding which index will find fewer rows and using that index to fetch the rows.

If the table has a multiple-column index, any leftmost prefix of the index can be used by the optimizer to find rows. For example, if you have a three-column index on (col1,col2,col3), you have indexed search capabilities on (col1), (col1,col2), and (col1,col2,col3).

MySQL can't use a partial index if the columns don't form a leftmost prefix of the index. Suppose you have the SELECT statements shown below:

mysql> SELECT * FROM tbl_name WHERE col1=val1;
mysql> SELECT * FROM tbl_name WHERE col2=val2;
mysql> SELECT * FROM tbl_name WHERE col2=val2 AND col3=val3;

If an index exists on (col1,col2,col3), only the first query shown above uses the index. The second and third queries do involve indexed columns, but (col2) and (col2,col3) are not leftmost prefixes of (col1,col2,col3).
MySQL also uses indexes for LIKE comparisons if the argument to LIKE is a constant string that doesn't start with a wild-card character. For example, the following SELECT statements use indexes:
mysql> select * from tbl_name where key_col LIKE "Patrick%";
mysql> select * from tbl_name where key_col LIKE "Pat%_ck%";

In the first statement, only rows with "Patrick" <= key_col < "Patricl" are considered. In the second statement, only rows with "Pat" <= key_col < "Pau" are considered. The following SELECT statements will not use indexes:

mysql> select * from tbl_name where key_col LIKE "%Patrick%";
mysql> select * from tbl_name where key_col LIKE other_col;

In the first statement, the LIKE value begins with a wild-card character. In the second statement, the LIKE value is not a constant.

Searching using column_name IS NULL will use indexes if column_name is an index.

MySQL normally uses the index that finds the least number of rows. An index is used for columns that you compare with the following operators: =, >, >=, <, <=, BETWEEN, and a LIKE with a non-wildcard prefix like 'something%'.

Any index that doesn't span all AND levels in the WHERE clause is not used to optimize the query. In other words: To be able to use an index, a prefix of the index must be used in every AND group.

The following WHERE clauses use indexes:

... WHERE index_part1=1 AND index_part2=2 AND other_column=3
... WHERE index=1 OR A=10 AND index=2 /* index = 1 OR index = 2 */
... WHERE index_part1='hello' AND index_part_3=5
/* optimized like "index_part1='hello'" */
... WHERE index1=1 and index2=2 or index1=3 and index3=3;
/* Can use index on index1 but not on index2 or index 3 */

These WHERE clauses do NOT use indexes:

... WHERE index_part2=1 AND index_part3=2 /* index_part_1 is not used */
... WHERE index=1 OR A=10 /* Index is not used in both AND parts */
... WHERE index_part1=1 OR index_part2=10 /* No index spans all rows */

Note that in some cases MySQL will not use an index, even if one would be available. Some of the cases where this happens are:

If the use of the index would require MySQL to access more than 30 % of the rows in the table. (In this case a table scan is probably much faster, as this will require us to do much fewer seeks). Note that if such a query uses LIMIT to only retrieve part of the rows, MySQL will use an index anyway, as it can much more quickly find the few rows to return in the result.

MySQL - Speed of Queries that Access or Update Data
First, one thing that affects all queries: The more complex permission system setup you have, the more overhead you get.

If you do not have any GRANT statements issued, MySQL will optimize the permission checking somewhat. So if you have very high volume, it may be worth the time to avoid grants. Otherwise, more permission checking results in larger overhead.

If your problem is with some explicit MySQL function, you can always time this in the MySQL client:

mysql> select benchmark(1000000,1+1);
+------------------------+
| benchmark(1000000,1+1) |
+------------------------+
| 0 |
+------------------------+
1 row in set (0.32 sec)

The above shows that MySQL can execute 1,000,000 simple + expressions in 0.32 seconds on a PentiumII 400MHz.

All MySQL functions should be very optimized, but there may be some exceptions, and the benchmark(loop_count,expression) is a great tool to find out if this is a problem with your query.


MySQL - Estimating Query Performance

In most cases you can estimate the performance by counting disk seeks. For small tables, you can usually find the row in 1 disk seek (as the index is probably cached). For bigger tables, you can estimate that (using B++ tree indexes) you will need: log(row_count) / log(index_block_length / 3 * 2 / (index_length + data_pointer_length)) + 1 seeks to find a row.

In MySQL an index block is usually 1024 bytes and the data pointer is usually 4 bytes. A 500,000 row table with an index length of 3 (medium integer) gives you: log(500,000)/log(1024/3*2/(3+4)) + 1 = 4 seeks.

As the above index would require about 500,000 * 7 * 3/2 = 5.2M, (assuming that the index buffers are filled to 2/3, which is typical) you will probably have much of the index in memory and you will probably only need 1-2 calls to read data from the OS to find the row.

For writes, however, you will need 4 seek requests (as above) to find where to place the new index and normally 2 seeks to update the index and write the row.

Note that the above doesn't mean that your application will slowly degenerate by N log N! As long as everything is cached by the OS or the SQL server, things will only go marginally slower while the table gets bigger. After the data gets too big to be cached, things will start to go much slower until your application is only bound by disk seeks (which increase by N log N). To avoid this, increase the index cache as the data grows.

MySQL - Speed of SELECT Queries ?
In general, when you want to make a slow SELECT ... WHERE faster, the first thing to check is whether or not you can add an index.

All references between different tables should usually be done with indexes.
You can use the EXPLAIN command to determine which indexes are used for a SELECT.
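For example (a hedged sketch; tbl_name and key_part1 are placeholders for your own table and indexed column), you can check whether an index is considered for a query like this:

mysql> EXPLAIN SELECT * FROM tbl_name WHERE key_part1=10;

The output shows, for each table, which key (if any) will be used and roughly how many rows MySQL expects to examine.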

Some general tips:

To help MySQL optimize queries better, run myisamchk --analyze on a table after it has been loaded with relevant data. This updates a value for each index part that indicates the average number of rows that have the same value. (For unique indexes, this is always 1, of course.) MySQL will use this to decide which index to choose when you connect two tables with 'a non-constant expression'. You can check the result from the analyze run by doing SHOW INDEX FROM table_name and examining the Cardinality column.
To sort an index and data according to an index, use myisamchk --sort-index --sort-records=1 (if you want to sort on index 1). If you have a unique index from which you want to read all records in order according to that index, this is a good way to make that faster. Note, however, that this sorting isn't written optimally and will take a long time for a large table!
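A hedged sketch of the two maintenance commands above (the paths and table name are placeholders; remember to flush and lock the tables, or take the server down, before running myisamchk, as noted earlier):

shell> myisamchk --analyze /path/to/datadir/db_name/tbl_name
shell> myisamchk --sort-index --sort-records=1 /path/to/datadir/db_name/tbl_name
mysql> SHOW INDEX FROM tbl_name;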

How MySQL Optimizes WHERE Clauses ?
The WHERE optimizations are put in the SELECT part here because they are mostly used with SELECT, but the same optimizations apply for WHERE in DELETE and UPDATE statements.

Also note that this section is incomplete. MySQL does many optimizations, and we have not had time to document them all.

Some of the optimizations performed by MySQL are listed below:

Removal of unnecessary parentheses:
((a AND b) AND c OR (((a AND b) AND (c AND d))))
-> (a AND b AND c) OR (a AND b AND c AND d)

Constant folding:
(a<b AND b=c) AND a=5
-> b>5 AND b=c AND a=5

Constant condition removal (needed because of constant folding):
(B>=5 AND B=5) OR (B=6 AND 5=5) OR (B=7 AND 5=6)
-> B=5 OR B=6

Constant expressions used by indexes are evaluated only once.
COUNT(*) on a single table without a WHERE is retrieved directly from the table information. This is also done for any NOT NULL expression when used with only one table.
Early detection of invalid constant expressions. MySQL quickly detects that some SELECT statements are impossible and returns no rows.
HAVING is merged with WHERE if you don't use GROUP BY or group functions (COUNT(), MIN()...).
For each sub-join, a simpler WHERE is constructed to get a fast WHERE evaluation for each sub-join and also to skip records as soon as possible.
All constant tables are read first, before any other tables in the query. A constant table is:
An empty table or a table with 1 row.
A table that is used with a WHERE clause on a UNIQUE index, or a PRIMARY KEY, where all index parts are used with constant expressions and the index parts are defined as NOT NULL.
All the following tables are used as constant tables:
mysql> SELECT * FROM t WHERE primary_key=1;
mysql> SELECT * FROM t1,t2
WHERE t1.primary_key=1 AND t2.primary_key=t1.id;

The best join combination to join the tables is found by trying all possibilities. If all columns in ORDER BY and in GROUP BY come from the same table, then this table is preferred first when joining.
If there is an ORDER BY clause and a different GROUP BY clause, or if the ORDER BY or GROUP BY contains columns from tables other than the first table in the join queue, a temporary table is created.
If you use SQL_SMALL_RESULT, MySQL will use an in-memory temporary table.
Each table index is queried, and the best index that spans fewer than 30% of the rows is used. If no such index can be found, a quick table scan is used.
In some cases, MySQL can read rows from the index without even consulting the data file. If all columns used from the index are numeric, then only the index tree is used to resolve the query.
Before each record is output, those that do not match the HAVING clause are skipped.
Some examples of queries that are very fast:

mysql> SELECT COUNT(*) FROM tbl_name;
mysql> SELECT MIN(key_part1),MAX(key_part1) FROM tbl_name;
mysql> SELECT MAX(key_part2) FROM tbl_name
WHERE key_part_1=constant;
mysql> SELECT ... FROM tbl_name
ORDER BY key_part1,key_part2,... LIMIT 10;
mysql> SELECT ... FROM tbl_name
ORDER BY key_part1 DESC,key_part2 DESC,... LIMIT 10;

The following queries are resolved using only the index tree (assuming the indexed columns are numeric):

mysql> SELECT key_part1,key_part2 FROM tbl_name WHERE key_part1=val;
mysql> SELECT COUNT(*) FROM tbl_name
WHERE key_part1=val1 AND key_part2=val2;
mysql> SELECT key_part2 FROM tbl_name GROUP BY key_part1;

The following queries use indexing to retrieve the rows in sorted order without a separate sorting pass:

mysql> SELECT ... FROM tbl_name ORDER BY key_part1,key_part2,...;
mysql> SELECT ... FROM tbl_name ORDER BY key_part1 DESC,key_part2 DESC,...;

How MySQL Optimizes DISTINCT ?
DISTINCT is converted to a GROUP BY on all columns. DISTINCT combined with ORDER BY will in many cases also need a temporary table.

When combining LIMIT # with DISTINCT, MySQL will stop as soon as it finds # unique rows.

If you don't use columns from all used tables, MySQL will stop the scanning of the not used tables as soon as it has found the first match.

SELECT DISTINCT t1.a FROM t1,t2 where t1.a=t2.a;

In this case, assuming t1 is used before t2 (check with EXPLAIN), MySQL will stop reading from t2 (for that particular row in t1) when the first matching row in t2 is found.

How MySQL Optimizes LEFT JOIN and RIGHT JOIN ?
A LEFT JOIN B in MySQL is implemented as follows:

The table B is set to be dependent on table A and all tables that A is dependent on.
The table A is set to be dependent on all tables (except B) that are used in the LEFT JOIN condition.
All LEFT JOIN conditions are moved to the WHERE clause.
All standard join optimizations are done, with the exception that a table is always read after all tables it is dependent on. If there is a circular dependence then MySQL will issue an error.
All standard WHERE optimizations are done.
If there is a row in A that matches the WHERE clause, but there wasn't any row in B that matched the LEFT JOIN condition, then an extra B row is generated with all columns set to NULL.
If you use LEFT JOIN to find rows that don't exist in some table and you have the following test: column_name IS NULL in the WHERE part, where column_name is a column that is declared as NOT NULL, then MySQL will stop searching after more rows (for a particular key combination) after it has found one row that matches the LEFT JOIN condition.
RIGHT JOIN is implemented analogously to LEFT JOIN.

The table read order forced by LEFT JOIN and STRAIGHT JOIN will help the join optimizer (which calculates in which order tables should be joined) to do its work much more quickly, as there are fewer table permutations to check.

Note that the above means that if you do a query of type:

SELECT * FROM a,b LEFT JOIN c ON (c.key=a.key) LEFT JOIN d ON (d.key=a.key) WHERE b.key=d.key

MySQL will do a full scan on b as the LEFT JOIN will force it to be read before d.

The fix in this case is to change the query to:

SELECT * FROM b,a LEFT JOIN c ON (c.key=a.key) LEFT JOIN d ON (d.key=a.key) WHERE b.key=d.key

How MySQL Optimizes LIMIT ?
In some cases MySQL will handle the query differently when you are using LIMIT # and not using HAVING:

If you are selecting only a few rows with LIMIT, MySQL will use indexes in some cases when it normally would prefer to do a full table scan.
If you use LIMIT # with ORDER BY, MySQL will end the sorting as soon as it has found the first # lines instead of sorting the whole table.
When combining LIMIT # with DISTINCT, MySQL will stop as soon as it finds # unique rows.
In some cases a GROUP BY can be resolved by reading the key in order (or do a sort on the key) and then calculate summaries until the key value changes. In this case LIMIT # will not calculate any unnecessary GROUP BY's.
As soon as MySQL has sent the first # rows to the client, it will abort the query.
LIMIT 0 will always quickly return an empty set. This is useful to check the query and to get the column types of the result columns.
The size of temporary tables is calculated using the LIMIT # to determine how much space is needed to resolve the query.
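A small hedged illustration of the LIMIT 0 point above (tbl_name is a placeholder): the statement returns the result column metadata without reading any rows.

mysql> SELECT * FROM tbl_name LIMIT 0;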

MySQL - Speed of INSERT Queries ?
The time to insert a record consists approximately of:

Connect: (3)
Sending query to server: (2)
Parsing query: (2)
Inserting record: (1 x size of record)
Inserting indexes: (1 x number of indexes)
Close: (1)
where the numbers are somewhat proportional to the overall time. This does not take into consideration the initial overhead to open tables (which is done once for each concurrently running query).

The size of the table slows down the insertion of indexes by N log N (B-trees).

Some ways to speed up inserts:

If you are inserting many rows from the same client at the same time, use INSERT statements with multiple value lists. This is much faster (many times faster in some cases) than using separate INSERT statements.
If you are inserting a lot of rows from different clients, you can get higher speed by using the INSERT DELAYED statement.

Note that with MyISAM you can insert rows at the same time SELECTs are running if there are no deleted rows in the tables. When loading a table from a text file, use LOAD DATA INFILE. This is usually 20 times faster than using a lot of INSERT statements.

It is possible with some extra work to make LOAD DATA INFILE run even faster when the table has many indexes. Use the following procedure:
Optionally create the table with CREATE TABLE. For example, using mysql or Perl-DBI.
Execute a FLUSH TABLES statement or the shell command mysqladmin flush-tables.
Use myisamchk --keys-used=0 -rq /path/to/db/tbl_name. This will remove all usage of all indexes from the table.
Insert data into the table with LOAD DATA INFILE. This will not update any indexes and will therefore be very fast.
If you are going to only read the table in the future, run myisampack on it to make it smaller.

Re-create the indexes with myisamchk -r -q /path/to/db/tbl_name. This will create the index tree in memory before writing it to disk, which is much faster because it avoids lots of disk seeks. The resulting index tree is also perfectly balanced. Execute a FLUSH TABLES statement or the shell command mysqladmin flush-tables.
This procedure will be built into LOAD DATA INFILE in some future version of MySQL.
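Put together, the fast-load procedure above might look like this (a hedged sketch; the paths, file name, and table name are placeholders, and myisamchk must not run while mysqld has the table open for writing):

shell> mysqladmin flush-tables
shell> myisamchk --keys-used=0 -rq /path/to/db/tbl_name
mysql> LOAD DATA INFILE '/tmp/tbl_name.txt' INTO TABLE tbl_name;
shell> myisamchk -r -q /path/to/db/tbl_name
shell> mysqladmin flush-tables
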
You can speed up insertions by locking your tables:
mysql> LOCK TABLES a WRITE;
mysql> INSERT INTO a VALUES (1,23),(2,34),(4,33);
mysql> INSERT INTO a VALUES (8,26),(6,29);
mysql> UNLOCK TABLES;

The main speed difference is that the index buffer is flushed to disk only once, after all INSERT statements have completed. Normally there would be as many index buffer flushes as there are different INSERT statements. Locking is not needed if you can insert all rows with a single statement. Locking will also lower the total time of multi-connection tests, but the maximum wait time for some threads will go up (because they wait for locks). For example:
thread 1 does 1000 inserts
threads 2, 3, and 4 do 1 insert each
thread 5 does 1000 inserts

If you don't use locking, 2, 3, and 4 will finish before 1 and 5. If you use locking, 2, 3, and 4 probably will not finish before 1 or 5, but the total time should be about 40% faster. As INSERT, UPDATE, and DELETE operations are very fast in MySQL, you will obtain better overall performance by adding locks around everything that does more than about 5 inserts or updates in a row. If you do very many inserts in a row, you could do a LOCK TABLES followed by an UNLOCK TABLES once in a while (about each 1000 rows) to allow other threads access to the table. This would still result in a nice performance gain. Of course, LOAD DATA INFILE is much faster for loading data.
To get some more speed for both LOAD DATA INFILE and INSERT, enlarge the key buffer.

MySQL - Speed of UPDATE Queries ?
Update queries are optimized as a SELECT query with the additional overhead of a write. The speed of the write is dependent on the size of the data that is being updated and the number of indexes that are updated. Indexes that are not changed will not be updated.

Another way to get fast updates is to delay updates and then do many updates in a row later. Doing many updates in a row is much quicker than doing one at a time if you lock the table.

Note that, with dynamic record format, updating a record to a longer total length may split the record. So if you do this often, it is very important to OPTIMIZE TABLE sometimes.

MySQL - Speed of DELETE Queries ?
If you want to delete all rows in the table, you should use TRUNCATE TABLE table_name.
The time to delete a record is exactly proportional to the number of indexes. To delete records more quickly, you can increase the size of the index cache.

MySQL - Other Optimization Tips
Unsorted tips for faster systems:

Use persistent connections to the database to avoid the connection overhead. If you can't use persistent connections and you are doing a lot of new connections to the database, you may want to change the value of the thread_cache_size variable.

Always check that all your queries really use the indexes you have created in the tables. In MySQL you can do this with the EXPLAIN command.

Try to avoid complex SELECT queries on tables that are updated a lot. This is to avoid problems with table locking. The new MyISAM tables can insert rows into a table without deleted rows at the same time another thread is reading from it. If this is important for you, you should consider methods where you don't have to delete rows, or run OPTIMIZE TABLE after you have deleted a lot of rows.
Use ALTER TABLE ... ORDER BY expr1,expr2... if you mostly retrieve rows in expr1,expr2.. order. By using this option after big changes to the table, you may be able to get higher performance.
In some cases it may make sense to introduce a column that is 'hashed' based on information from other columns. If this column is short and reasonably unique, it may be much faster than a big index on many columns. In MySQL it's very easy to use this extra column: SELECT * FROM table_name WHERE hash=MD5(CONCAT(col1,col2)) AND col1='constant' AND col2='constant'. For tables that change a lot, you should try to avoid VARCHAR and BLOB columns. You get dynamic row length as soon as you use a single VARCHAR or BLOB column.
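A hedged sketch of the hashed-column idea (the table and column names are placeholders; MD5() and CONCAT() are used as in the query above):

mysql> ALTER TABLE table_name ADD hash CHAR(32) NOT NULL, ADD INDEX (hash);
mysql> UPDATE table_name SET hash=MD5(CONCAT(col1,col2));
mysql> SELECT * FROM table_name WHERE hash=MD5(CONCAT('constant1','constant2')) AND col1='constant1' AND col2='constant2';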

It's not normally useful to split a table into different tables just because the rows gets 'big'. To access a row, the biggest performance hit is the disk seek to find the first byte of the row. After finding the data most new disks can read the whole row fast enough for most applications. The only cases where it really matters to split up a table is if it's a dynamic row size table (see above) that you can change to a fixed row size, or if you very often need to scan the table and don't need most of the columns.

If you very often need to calculate things based on information from a lot of rows (like counts of things), it's probably much better to introduce a new table and update the counter in real time. An update of type UPDATE table set count=count+1 where index_column=constant is very fast! This is really important when you use databases like MySQL that only have table locking (multiple readers / single writers). This will also give better performance with most databases, as the row locking manager in this case will have less to do.
If you need to collect statistics from big log tables, use summary tables instead of scanning the whole table. Maintaining the summaries should be much faster than trying to do statistics 'live'. It's much faster to regenerate new summary tables from the logs when things change (depending on business decisions) than to have to change the running application! If possible, one should classify reports as 'live' or 'statistical', where data needed for statistical reports are only generated based on summary tables that are generated from the actual data.
Take advantage of the fact that columns have default values. Insert values explicitly only when the value to be inserted differs from the default. This reduces the parsing that MySQL needs to do and improves the insert speed. In some cases it's convenient to pack and store data into a BLOB. In this case you have to add some extra code in your application to pack/unpack things in the BLOB, but this may save a lot of accesses at some stage. This is practical when you have data that doesn't conform to a static table structure.
Normally you should try to keep all data non-redundant (what is called 3rd normal form in database theory), but you should not be afraid of duplicating things or creating summary tables if you need these to gain more speed.
Stored procedures or UDF (user-defined functions) may be a good way to get more performance. In this case you should, however, always have a way to do this some other (slower) way if you use some database that doesn't support this. You can always gain something by caching queries/answers in your application and trying to do many inserts/updates at the same time. If your database supports lock tables (like MySQL and Oracle), this should help to ensure that the index cache is only flushed once after all updates.
Use INSERT /*! DELAYED */ when you do not need to know when your data is written. This speeds things up because many records can be written with a single disk write.
Use INSERT /*! LOW_PRIORITY */ when you want your selects to be more important.
Use SELECT /*! HIGH_PRIORITY */ to get selects that jump the queue. That is, the select is done even if there is somebody waiting to do a write.
Use the multi-line INSERT statement to store many rows with one SQL command (many SQL servers support this).
Use LOAD DATA INFILE to load bigger amounts of data. This is faster than normal inserts and will be even faster when myisamchk is integrated in mysqld.
Use AUTO_INCREMENT columns to make unique values.
Use OPTIMIZE TABLE once in a while to avoid fragmentation when using dynamic table format.

Use HEAP tables to get more speed when possible.

When using a normal Web server setup, images should be stored as files. That is, store only a file reference in the database. The main reason for this is that a normal Web server is much better at caching files than database contents, so it's much easier to get a fast system if you are using files.
Use in memory tables for non-critical data that are accessed often (like information about the last shown banner for users that don't have cookies).
Columns with identical information in different tables should be declared identical and have identical names. Before Version 3.23 you got slow joins otherwise. Try to keep the names simple (use name instead of customer_name in the customer table). To make your names portable to other SQL servers you should keep them shorter than 18 characters.
If you need REALLY high speed, you should take a look at the low-level interfaces for data storage that the different SQL servers support! For example, by accessing the MySQL MyISAM tables directly, you could get a speed increase of 2-5 times compared to using the SQL interface. To be able to do this the data must be on the same server as the application, and usually it should only be accessed by one process (because external file locking is really slow). One could eliminate the above problems by introducing low-level MyISAM commands in the MySQL server (this could be one easy way to get more performance if needed). By carefully designing the database interface, it should be quite easy to support these types of optimization. In many cases it's faster to access data from a database (using a live connection) than accessing a text file, just because the database is likely to be more compact than the text file (if you are using numerical data), and this will involve fewer disk accesses. You will also save code because you don't have to parse your text files to find line and column boundaries. You can also use replication to speed things up.

Declaring a table with DELAY_KEY_WRITE=1 will make the updating of indexes faster, as these are not flushed to disk until the file is closed. The downside is that you should run myisamchk on these tables before you start mysqld to ensure that they are okay if something killed mysqld in the middle. As the key information can always be generated from the data, you should not lose anything by using DELAY_KEY_WRITE.

MySQL - Using Your Own Benchmarks
You should definitely benchmark your application and database to find out where the bottlenecks are. By fixing a bottleneck (or by replacing it with a 'dummy module') you can then easily identify the next one (and so on). Even if the overall performance for your application is sufficient, you should at least make a plan for each bottleneck, and decide how to solve it if someday you really need the extra performance.

For an example of portable benchmark programs, look at the MySQL benchmark suite.
You can take any program from this suite and modify it for your needs. By doing this, you can try different solutions to your problem and test which is really the fastest solution for you.

It is very common that some problems only occur when the system is very heavily loaded. We have had many customers who contact us when they have a (tested) system in production and have encountered load problems. In every one of these cases so far, the problems have been with basic design (table scans are NOT good at high load) or OS/library issues. Most of this would be a LOT easier to fix if the systems were not already in production.

To avoid problems like this, you should put some effort into benchmarking your whole application under the worst possible load! You can use Sasha's recent hack for this - super-smack. As the name suggests, it can bring your system down to its knees if you ask it, so make sure to use it only on your development systems.

MySQL - Design Choices
MySQL keeps row data and index data in separate files. Many (almost all) other databases mix row and index data in the same file. We believe that the MySQL choice is better for a very wide range of modern systems.

Another way to store the row data is to keep the information for each column in a separate area (examples are SDBM and Focus). This will cause a performance hit for every query that accesses more than one column. Because this degenerates so quickly when more than one column is accessed, we believe that this model is not good for general purpose databases.

The more common case is that the index and data are stored together (like in Oracle/Sybase et al). In this case you will find the row information at the leaf page of the index. The good thing with this layout is that it, in many cases, depending on how well the index is cached, saves a disk read. The bad things with this layout are:

Table scanning is much slower because you have to read through the indexes to get at the data.
You can't use only the index table to retrieve data for a query.
You lose a lot of space, as you must duplicate indexes from the nodes (as you can't store the row in the nodes).
Deletes will degenerate the table over time (as indexes in nodes are usually not updated on delete).
It's harder to cache ONLY the index data.

MySQL Design Limitations/Tradeoffs
Because MySQL uses extremely fast table locking (multiple readers / single writers), the biggest remaining problem is a mix of a steady stream of inserts and slow selects on the same table.

We believe that for a huge number of systems the extremely fast performance in other cases makes this choice a win. This case can usually also be solved by having multiple copies of the table, but that takes more effort and hardware.

We are also working on some extensions to solve this problem for some common application niches.
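
Two MySQL features that already help with this particular mix are the DELAYED and LOW_PRIORITY modifiers for INSERT (a sketch only, and not necessarily the extensions referred to above; the table is hypothetical):

-- Queue the row in memory and return at once; the server writes it
-- out when no SELECT is using the table.
INSERT DELAYED INTO access_log (page, hit_time) VALUES ('/index.html', NOW());

-- Wait to insert until no other clients are reading from the table.
INSERT LOW_PRIORITY INTO access_log (page, hit_time) VALUES ('/about.html', NOW());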

MySQL - Portability
Because all SQL servers implement different parts of SQL, it takes work to write portable SQL applications. For very simple selects/inserts it is very easy, but the more you need the harder it gets. If you want an application that is fast with many databases it becomes even harder!

To make a complex application portable you need to choose a number of SQL servers that it should work with.

You can use the MySQL crash-me program/web-page http://www.mysql.com/information/crash-me.php to find functions, types, and limits you can use with a selection of database servers. Crash-me is far from testing everything possible, but it is still comprehensive, with about 450 things tested.

For example, you shouldn't have column names longer than 18 characters if you want to be able to use Informix or DB2.

Both the MySQL benchmarks and crash-me programs are very database-independent. By taking a look at how we have handled this, you can get a feeling for what you have to do to make your own application database-independent. The benchmarks themselves can be found in the `sql-bench' directory in the MySQL source distribution. They are written in Perl with the DBI database interface (which solves the access part of the problem).

See http://www.mysql.com/information/benchmarks.html for the results from this benchmark.

As you can see in these results, all databases have some weak points. That is, they have different design compromises that lead to different behavior.

If you strive for database independence, you need to get a good feeling for each SQL server's bottlenecks. MySQL is VERY fast in retrieving and updating things, but will have a problem in mixing slow readers/writers on the same table. Oracle, on the other hand, has a big problem when you try to access rows that you have recently updated (until they are flushed to disk). Transaction databases in general are not very good at generating summary tables from log tables, as in this case row locking is almost useless.

To get your application really database-independent, you need to define an easily extendable interface through which you manipulate your data. As C++ is available on most systems, it makes sense to use a C++ class interface to the databases.

If you use some specific feature for some database (like the REPLACE command in MySQL), you should code a method for the other SQL servers to implement the same feature (but slower). With MySQL you can use the /*! */ syntax to add MySQL-specific keywords to a query. The code inside /*! */ will be treated as a comment (ignored) by most other SQL servers.
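
A short sketch of both techniques (the table and column names are invented). MySQL executes the keywords inside /*! */, most other servers simply skip them, and REPLACE can be emulated elsewhere with a DELETE followed by an INSERT:

-- MySQL-specific table option hidden inside a version comment;
-- other SQL servers treat it as an ordinary comment.
CREATE TABLE sessions (
  session_id CHAR(32) NOT NULL PRIMARY KEY,
  user_name  VARCHAR(40)
) /*! TYPE=MyISAM */;

-- MySQL shortcut:
REPLACE INTO sessions (session_id, user_name) VALUES ('abc123', 'joe');

-- Portable (slower) equivalent for servers without REPLACE:
DELETE FROM sessions WHERE session_id = 'abc123';
INSERT INTO sessions (session_id, user_name) VALUES ('abc123', 'joe');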

If REAL high performance is more important than exactness, as in some Web applications, a possibility is to create an application layer that caches all results to give you even higher performance. By letting old results 'expire' after a while, you can keep the cache reasonably fresh. This is quite nice under extremely high load, when you can dynamically increase the cache and set the expiry timeout higher until things get back to normal.

In this case the table creation information should contain information about the initial size of the cache and how often the table should normally be refreshed.

What Have We Used MySQL For?
During MySQL's initial development, the features of MySQL were made to fit our largest customer, which handles data warehousing for a couple of the biggest retailers in Sweden.

From all stores, we get weekly summaries of all bonus card transactions, and we are expected to provide useful information for the store owners to help them find out how their advertising campaigns are affecting their customers.

The amount of data is quite large (about 7 million summary transactions per month), and we have data for 4-10 years that we need to present to the users. We get weekly requests from the customers, who want 'instant' access to new reports from this data.

We solved this by storing all information per month in compressed 'transaction' tables. We have a set of simple macros (script) that generates summary tables grouped by different criteria (product group, customer id, store ...) from the transaction tables. The reports are Web pages that are dynamically generated by a small Perl script that parses a Web page, executes the SQL statements in it, and inserts the results. We would have used PHP or mod_perl instead but they were not available at that time.
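
A hypothetical sketch of generating one such summary table (all table and column names are invented; the real schema is of course different):

-- Summarize one month of transaction data by store and product group.
CREATE TABLE summary_store_prodgroup_199901
  SELECT store_id,
         product_group,
         COUNT(*)    AS nr_transactions,
         SUM(amount) AS total_amount
  FROM transactions_199901
  GROUP BY store_id, product_group;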

For graphical data we wrote a simple tool in C that can produce GIFs based on the result of a SQL query (with some processing of the result). This is also dynamically executed from the Perl script that parses the HTML files.

In most cases a new report can simply be done by copying an existing script and modifying the SQL query in it. In some cases, we will need to add more fields to an existing summary table or generate a new one, but this is also quite simple, as we keep all transaction tables on disk. (Currently we have at least 50G of transaction tables and 200G of other customer data.)

We also let our customers access the summary tables directly with ODBC so that the advanced users can themselves experiment with the data.

We haven't had any problems handling this with a quite modest Sun Ultra SPARCstation (2x200 MHz). We recently upgraded one of our servers to a 2-CPU 400 MHz UltraSPARC, and we are now planning to start handling transactions at the product level, which would mean a ten-fold increase in data. We think we can keep up with this by just adding more disk to our systems.

We are also experimenting with Intel-Linux to be able to get more CPU power more cheaply. Now that we have the binary portable database format (new in Version 3.23), we will start to use it for some parts of the application.

Our initial feeling is that Linux will perform much better under low-to-medium load and Solaris will perform better when you start to get a high load because of extreme disk I/O, but we don't yet have anything conclusive about this. After some discussion with a Linux kernel developer, this might be a side effect of Linux giving so many resources to the batch job that the interactive performance gets very low. This makes the machine feel very slow and unresponsive while big batch jobs are running. Hopefully this will be handled better in future Linux kernels.

What is the difference between mysql_fetch_array and mysql_fetch_object?
mysql_fetch_array — Fetches a result row as an associative array, a numeric array, or both, so a field is read as $row['user_name'] or $row[0].
mysql_fetch_object — Fetches a result row as an object, so the same field is read as $row->user_name.

What are the different table types present in MySQL?
MyISAM: The default type. Based on the Indexed Sequential Access Method; a plain CREATE TABLE statement creates a MyISAM table.
ISAM: The older predecessor of MyISAM.
HEAP: Very fast in-memory access, but the data is lost if the server crashes. Cannot have BLOB, TEXT or AUTO_INCREMENT fields.
BDB: Supports transactions using COMMIT and ROLLBACK. Slower than the others.
InnoDB: Also supports transactions using COMMIT and ROLLBACK, like BDB.
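
A quick sketch of declaring each type (old MySQL versions use TYPE= rather than ENGINE=; the table definitions are made up):

CREATE TABLE m1 (id INT, txt VARCHAR(40)) TYPE=MyISAM;
CREATE TABLE h1 (id INT, txt VARCHAR(40)) TYPE=HEAP;
CREATE TABLE b1 (id INT, txt VARCHAR(40)) TYPE=BDB;
CREATE TABLE i1 (id INT, txt VARCHAR(40)) TYPE=InnoDB;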
