Tags Filter: innodb
posted by Lenz Grimmer
on Tue 27 Oct 2009 08:07 UTC
An introduction to using XtraBackup to perform hot backups of MySQL.
posted by Sebastien Dejean
on Thu 01 Oct 2009 21:55 UTC
A quick comparison of the strong points of InnoDB Hot Backup and XtraBackup.
posted by Sheeri Cabral
on Wed 23 Sep 2009 20:07 UTC
Presented at MySQL Camp 2009 by Ewen Fortune of Percona.
XtraDB is a storage engine for MySQL based on the InnoDB storage engine, designed to scale better on modern hardware and including a variety of other features useful in high-performance environments. It is fully backwards compatible, and so can be used as a drop-in replacement for standard InnoDB.
XtraDB includes all of InnoDB's robust, reliable, ACID-compliant design and advanced MVCC architecture, and builds on that solid foundation with more features, more tunability, more metrics, and more scalability. In particular, it is designed to scale better on many cores, to use memory more efficiently, and to be more convenient and useful. The new features are especially designed to alleviate some of InnoDB's limitations. We will talk about the current status of XtraDB and the direction of its development.
posted by Sheeri Cabral
on Tue 22 Sep 2009 20:31 UTC
InnoDB Database Recovery Techniques
Peter Zaitsev (Percona)
Presented at the 2009 MySQL Camp by Peter Zaitsev of Percona.
Description:
Have you ever had an InnoDB database corrupted, or deleted data accidentally and wanted it back? This session will go through various approaches you can use to get most of your data back, using MySQL's built-in features as well as third-party open source tools.
This session covers InnoDB database recovery techniques (apart from recovering from backup).
First we will discuss various types of InnoDB corruption and data loss scenarios, ranging from user error to hardware failure.
Then we will look at the InnoDB on-disk data structures to see what foundations they provide for corruption discovery and recovery.
Then we will go into approaches one can use to recover data, including:
* Recovering an InnoDB dictionary that has run out of sync with the .ibd files
* Recovering minor
posted by Sheeri Cabral
on Tue 22 Sep 2009 20:31 UTC
Crash Recovery and Media Recovery in InnoDB
Heikki Tuuri (Innobase / Oracle Corp.)
From the official conference page at
http://www.mysqlconf.com/mysql2009/public/schedule/detail/6843
InnoDB provides reliable, tried-and-tested automatic crash recovery for MySQL database users. It is based on a redo log and write-ahead logging, two techniques that are used in most of the leading database engines in the world. We will explain these techniques in depth, and present some benchmarks of the crash recovery time.
InnoDB's media recovery is done from a (hot) backup of a database, using the MySQL binlog to replay the transactions just as they happened originally. We will explain the difference between MySQL's statement-based and row-based replication and binlogging, and how they should be used to ensure a successful replay of transactions.
posted by Sheeri Cabral
on Fri 11 Sep 2009 08:23 UTC
"There's a change of behaviour in MySQL 5.1.31 for Row Based Replication, if you have InnoDB transactions that also write to a MyISAM (or other non-transactional engine) table."
"the moral of the story. Don't use non-transactional tables in the middle of a transaction. Ever. You will only cause yourself more pain than you can possibly imagine. Instead, move the writes to the non-transactional tables outside of the transaction."
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
I was surprised to find out that Innobase's Innobackup script did not read the socket file location from the "my.cnf" file, so I added it in as a feature. A few notes:
* These modifications are against version 1.4.0 of the script. A quick search for the "socket" keyword in the 1.5.0 version of the script did not reveal any similar functionality to what I'm proposing here.
* These modifications have not actually been
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
The unsung heroes of InnoDB are the logfiles. They are what makes InnoDB's automatic crash recovery possible. Database administrators of other DBMSs may be familiar with the concept of a "redo" log. When data is changed, affected data pages are changed in the innodb_buffer_pool. Then, the change is written to the redo log, which in MySQL is the InnoDB logfile (ib_logfile0 and ib_logfile1). The pages are marked as "dirty", and eventually get flushed and written to disk. If MySQL crashes, t
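A quick way to inspect how the redo log is set up on a given server (variable names as in MySQL 5.x; nothing here is specific to the post):

-- size and number of the ib_logfile redo logs
SHOW GLOBAL VARIABLES LIKE 'innodb_log_file%';
-- how durably the log is flushed at COMMIT (1 = flush and sync on every commit)
SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
-- the LOG section of this output shows the current log sequence number
-- and how far the last checkpoint lags behind it
SHOW ENGINE INNODB STATUS\G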
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
Recently, a customer wondered if they should start using the "innodb_file_per_table" option, or if they should continue to use the large InnoDB tablespace files created by the "innodb_data_file_path" option in the "my.cnf" option file. Many people still use the older "innodb_data_file_path" option because it is the default for the MySQL server. So, what are the benefits of using "innodb_file_per_table" instead? The "innodb_file_per_table" option makes for easier-to-manage files. With this option, each InnoDB table has its own data and index file under the database directory. As an example, if you had table "foo" located in database "xyz", the InnoDB data file for table "foo" would be "/var/lib/mysql/data/xyz/foo.ibd". Each table would have its own .ibd file in the appropriate database directory. This is in contrast to using the
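As a hedged sketch (the xyz.foo names are the post's own example; note that in the built-in InnoDB of this era, innodb_file_per_table had to be set in my.cnf before startup rather than at runtime):

-- check whether the option is enabled
SHOW GLOBAL VARIABLES LIKE 'innodb_file_per_table';
-- tables created before the option was enabled stay in the shared
-- tablespace until rebuilt; a null ALTER moves xyz.foo into xyz/foo.ibd
ALTER TABLE xyz.foo ENGINE=InnoDB;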
[]
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
InnoDB is a storage engine that uses MVCC (described shortly) to provide ACID-compliant transactional data storage with row-level locking. MVCC stands for Multi-Version Concurrency Control. It is how InnoDB allows multiple transactions to look at a data set of one or more tables and have a consistent view of the data. MVCC keeps a virtual snapshot of the dataset for each transaction. An example will make this clear. Let's assume you have two transactions (and only two transactions) running on a system. If transaction A starts at 10:45:56 and ends at 10:46:29, it gets a consistent view of the dataset during the time that the transaction runs. If transaction B starts at 10:46:05, it would see exactly the same view of the dataset that transaction A saw when it began the transaction. If transaction B started at 10:46:35, it would see the modified dataset after transaction A made its modifications. During the duration of each
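A minimal two-session sketch of that snapshot behavior, assuming a hypothetical table t with one row (id = 1, val = 'old') and the default REPEATABLE READ isolation level:

-- session A
START TRANSACTION;
UPDATE t SET val = 'new' WHERE id = 1;

-- session B, started while A is still open
START TRANSACTION WITH CONSISTENT SNAPSHOT;
SELECT val FROM t WHERE id = 1;  -- sees 'old'

-- session A
COMMIT;

-- session B, still inside its original transaction
SELECT val FROM t WHERE id = 1;  -- still 'old'; the snapshot holds
COMMIT;
SELECT val FROM t WHERE id = 1;  -- outside the transaction, now sees 'new'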
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
It's time to continue our series on the transactional storage engines for MySQL. Some might question why I even include Falcon, because it is very much beta at this time. MySQL, however, has made quite an investment in Falcon, and while it is currently beta, the code is improving and it looks like it will be production-worthy when MySQL server 6.0 hits GA. If this is the case, it is important to begin to understand what Falcon was designed for and how it differs from other transactional engines such as InnoDB. I am going to concentrate quite a bit on the Falcon/InnoDB comparison, as that is what everyone wants to talk about. This is despite my having heard MySQL employees repeatedly make statements to the effect of, "Falcon is not going to replace InnoDB," or "Falcon is not competing with InnoDB." Well, take that with a grain of salt. It certainly seems to me that they are competing for the same
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
Or, There is a Use for GUI Tools! Recently I was working on a problem for a customer. They are converting a server with two InnoDB data files to "innodb_file_per_table":

$ ls -l ibdata*
-rwxr-xr-x 1 mysql mysql 2.0G Oct 14 13:10 ibdata1
-rwxr-xr-x 1 mysql mysql 350G Oct 14 13:10 ibdata2

I honestly don't recall ever seeing (or hearing about) so large a data file. The method chosen for conversion boils down to this:
* stop and start the server to enable "innodb_file_per_table"
* alter all tables to MyISAM
* stop the server and delete the ibdata files
* restart the server
* convert the tables back to InnoDB
* add foreign keys

This post isn't about how we did it, or about whether or not it was the best way, or anything like that. This post is really about the last two steps. There were over 500 InnoDB tables and something like 375 foreign keys, so I needed to somehow create the "ALTER TABLE" statements for
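The excerpt is cut off here, but one common way to mass-produce such statements (a sketch, not necessarily the author's method) is to let information_schema write them for you:

-- emit one ALTER TABLE per MyISAM table; save the output and replay it
SELECT CONCAT('ALTER TABLE `', table_schema, '`.`', table_name, '` ENGINE=InnoDB;')
FROM information_schema.TABLES
WHERE engine = 'MyISAM'
  AND table_schema NOT IN ('mysql', 'information_schema');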
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
Do you have a master-slave MySQL setup? Ever do DDL changes on the master? You may be hit with a serious data integrity bug. Read on. One of our clients does a regular "rename tables" on the master to keep the current table small and archive off old data. We'd occasionally be hit by a 'duplicate key' error on the slave on the current table and have to resolve it manually. Digging into the issue, I managed to replicate it on demand and filed bug 39675 with MySQL, which has subsequently been verified and slated for a fix, though it seems only in version 6.0. The bug affects all versions of MySQL from 4.1 to 6.0. In a nutshell, here is what happens. The "rename tables" command only checks for pending transactions or locks in the current session. If there is a pending transaction in another session on the table being renamed, the rename will succeed, but the order in which the transaction is written
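A minimal sketch of the race as described, with hypothetical tables current, archive, and new_current:

-- session 1: a transaction on the table about to be renamed
START TRANSACTION;
INSERT INTO current (id) VALUES (100);

-- session 2: the periodic archival rename; it only checks for pending
-- transactions and locks in its *own* session, so it succeeds
RENAME TABLE current TO archive, new_current TO current;

-- session 1
COMMIT;
-- the rename and the transaction can now reach the binary log in a
-- different order than they were applied, which is what produces the
-- duplicate-key errors on the slave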
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
I was doing some research over the weekend on how transactions work "under the hood" in MySQL. I thought it might be enlightening if I wrote about what I found. The database system must resolve the situation where, for whatever reason, the database server stops working in the middle of a transaction. Perhaps the power goes out, the hard drive crashes, or the CPU disappears in a cloud of smoke. Maybe the MySQL Server itself crashes for some reason. What does the MySQL Server do when operation resumes? Potentially, there are transactions in an inconsistent state. How are inconsistent transactions resolved? They are resolved through a recovery process involving log files that are called transactional logs. There are two kinds of transactional logs: redo logs and undo logs.
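A conceptual sketch of how the two logs divide the work after a crash, using a hypothetical table t (these are ordinary statements; the recovery itself happens automatically at restart):

-- transaction 1: committed just before the crash
START TRANSACTION;
UPDATE t SET val = 'A' WHERE id = 1;
COMMIT;  -- in the redo log: re-applied (redone) at restart even if the
         -- changed page never made it to disk

-- transaction 2: still open when the server died
START TRANSACTION;
UPDATE t SET val = 'B' WHERE id = 2;
-- no COMMIT: at restart the undo log rolls this back, leaving id = 2
-- exactly as it was before the transaction began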
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
Recently, while we were building a slave with a newer version of MySQL 5.1 from an InnoDB Hot Backup, the following error occurred when we ran "mysql_upgrade":

mysql.time_zone OK
mysql.time_zone_leap_second OK
mysql.time_zone_name OK
mysql.time_zone_transition OK
mysql.time_zone_transition_type OK
mysql.user OK
Running 'mysql_fix_privilege_tables'...
ERROR 13 (HY000) at line 311: Can't get stat of './mysql/general_log.CSV' (Errcode: 2)
ERROR 13 (HY000) at line 316: Can't get stat of './mysql/slow_log.CSV' (Errcode: 2)
FATAL ERROR: Upgrade failed

The problem is that in MySQL 5.1, it is possible to send the slow query log and general log to tables in the "mysql" schema (source: Selecting General Query and Slow Query Log Output Destinations). These tables are
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
A short time ago I posted about how I was Using XtraDB Backup to backup InnoDB. Overall, the blog post was positive, but experiences that others have had (and commented about on that blog post) have made me want to put out another short article about using XtraDB backup. The first few points remain the same -- the backup process is stable, we were able to use the binaries without compiling, and we used Innobackupex as the wrapper script, analogous to Innobackup.pl. However, we did figure out why Xtrabackup had to be run as the mysql user: Xtrabackup writes to the data dictionary file (ibdata1, for example). We have not examined whether it also writes to the data and index files (still ibdata1 by default, or the .ibd files when using innodb_file_per_table). [EDIT: The authors of Xtrabackup have commented below as to why the write occurs: "xtrabackup is a kind of very small and restricted instance of InnoDB, and reuses a lot of InnoDB code. InnoDB by
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
MySQL's manual page for InnoDB's adaptive hash index states: If a table fits almost entirely in main memory, the fastest way to perform queries on it is to use hash indexes. InnoDB has a mechanism that monitors index searches made to the indexes defined for a table. If InnoDB notices that queries could benefit from building a hash index, it does so automatically. Note that the hash index is always built base
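The feature's activity can be observed at runtime (and in the InnoDB Plugin and later it can also be switched on and off):

SHOW GLOBAL VARIABLES LIKE 'innodb_adaptive_hash_index';
-- the "INSERT BUFFER AND ADAPTIVE HASH INDEX" section of this output
-- reports hash searches/s versus non-hash searches/s
SHOW ENGINE INNODB STATUS\G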
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
It felt like the right time for us to look back at some useful commands for table maintenance that some of us may not have mastered as well as we might like to think. In my post about gathering index statistics, I referred to "OPTIMIZE TABLE", "ANALYZE TABLE", and "REPAIR TABLE" -- but I never explained in depth what the different commands do, and what the differences between them are. That is what I thought I would do with this post, focusing on InnoDB and MyISAM and the differences in how they treat those commands. I will also look at different cases and see which command is right in each case.
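For reference, the three statements in their simplest form, with t as a placeholder table:

ANALYZE TABLE t;   -- refresh the index statistics the optimizer relies on
OPTIMIZE TABLE t;  -- defragment/rebuild (for InnoDB this maps to a table
                   -- rebuild followed by an analyze)
REPAIR TABLE t;    -- fix a corrupted table; supported for MyISAM and
                   -- friends, not for InnoDB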
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
In two words: online operations. In a paragraph: Forget partitioning, row-based replication and events. The big reasons most people are going to salivate over 5.1, and probably start planning to upgrade now, are the online operations:
* online ALTER TABLE for column rename, column default value change, and adding values to the end of an ENUM/SET
* online, table-based logging. No more need to restart your server to enable or change the general or slow query logs. You can have the standard file-based output or choose a table format...which you can query (sketched below).
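The table-based logging can be driven entirely from SQL in 5.1; a short sketch (the query shown is illustrative):

-- send the general and slow query logs to tables instead of files
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';
SET GLOBAL slow_query_log = 'ON';
-- ...and then query them like any other table
SELECT event_time, argument
FROM mysql.general_log
ORDER BY event_time DESC
LIMIT 10;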