eZ Community » Forums » Setup & design » Constantly corrupted innodb in...

Constantly corrupted innodb in nightly tasks


Wednesday 19 September 2012 4:44:42 pm - 8 replies

Hi. We have been having a constant InnoDB corruption issue. I really think it's an eZ issue coming from a nightly cronjob, as the problem appears in the morning after the cronjobs have run.

We have other sites in this server with EZ Publish with no issues at all with the DB.

Which cronjob, or which part of the cronjob, could be causing this problem? Execution times? Memory?

We are running eZ Publish 4.4.

MySQL 5.5.

Thursday 20 September 2012 11:25:23 am

Hi Luis,

Which cronjob parts are you running?

Can you replicate this corruption on another server?

Can you run the cronjob parts individually on the command line on another server to find which one (if any) is causing the corruption?

What sort of corruption are you experiencing?

Cheers,
Geoff 

Thursday 20 September 2012 1:07:59 pm

Hi.

InnoDB is pretty robust. Maybe something in eZ Publish triggers the issue (high I/O or large transactions). However, the solution may not be in changing eZ Publish, but in tuning InnoDB for larger datasets and (more specifically) large transactions.

Some InnoDB suggestions:

***read the caution at the end before mucking with any InnoDB settings***

  • You can get corruption if the InnoDB log file size is too small and you have large transactions. The fix is to increase innodb_log_file_size to at least 100-250 MB.
  • InnoDB is robust, but it relies on the OS to tell it when writes have really completed [fsync()]. So, under high load, hardware issues can cause corruption:
    • a RAID array with writeback cache enabled that reports a write as synced when it is not
    • some NAS solutions, because InnoDB may think a sync happened when it did not

So, the quick 'n dirty approach would be to at least consider the InnoDB setting above. After that, there are some other InnoDB settings that may help.

** Now the words of caution:

  • Startup, shutdown, and recovery of InnoDB correlate directly with innodb_log_file_size. Don't go crazy here or you'll be sitting around till 2017 if things have to be rebuilt.
  • Back up your log files, etc.
  • Test on a separate machine if possible - don't just blindly change InnoDB settings (like innodb_file_per_table), as some changes won't take effect as you expect and your MySQL instance may not come back online, or worse.
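For reference, the log-size suggestion above could be sketched in my.cnf roughly like this. The values are illustrative, not a recommendation for any particular workload, and note that on MySQL 5.5 resizing the redo log requires a clean shutdown and moving the old ib_logfile* files aside before restarting:

```ini
[mysqld]
# Larger redo logs reduce the chance of trouble with large transactions.
# Illustrative value only; size it to your workload.
innodb_log_file_size = 128M

# Flush and sync the log at every commit (safest durability setting).
innodb_flush_log_at_trx_commit = 1

# Bypass the OS page cache for data files, so a "completed" write
# is less likely to be sitting in a volatile cache.
innodb_flush_method = O_DIRECT
```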

Thursday 20 September 2012 11:46:03 pm

Besides eZ cronjobs, watch out for DB backups and, if running on VMs, for snapshots that sysadmins might have scheduled.

These tasks generally make heavy I/O usage, which sometimes causes problems for databases...

...I could tell you horror stories of e.g. an MS SQL Server that would corrupt the HDD every time a backup was started. Talk about securing your data!

Wednesday 26 September 2012 9:03:16 pm

Quote from Gaetano Giunta :

Besides eZ cronjobs, watch out for DB backups and, if running on VMs, for snapshots that sysadmins might have scheduled.

These tasks generally make heavy I/O usage, which sometimes causes problems for databases...

...I could tell you horror stories of e.g. an MS SQL Server that would corrupt the HDD every time a backup was started. Talk about securing your data!

Gaetano, I think this is the path... I completely agree with you. We were torn between backups and cronjobs...

Wednesday 26 September 2012 9:06:26 pm

Quote from Geoff Bentley :

Hi Luis,

Which cronjob parts are you running?

Can you replicate this corruption on another server?

Can you run the cronjob parts individually on the command line on another server to find which one (if any) is causing the corruption?

What sort of corruption are you experiencing?

Cheers,
Geoff 

The cronjob runs all default tasks every 24 hours, and then we run the eZ Flow cronjobs. That's it... The only one that runs every 24 hours, at around 3-4 am, is the daily cronjob part (the default cronjobs), which, by the way, doesn't include any important tasks, at least for this site.
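To narrow down which part is responsible, a rough sketch (assuming a stock eZ Publish 4.x root; the siteaccess name below is a hypothetical placeholder) would be to run each cronjob part on its own and inspect InnoDB between runs:

```shell
# Run only one cronjob part (here the "daily" group) from the
# eZ Publish root; "ezwebin_site" is a placeholder siteaccess.
php runcronjobs.php daily -s ezwebin_site

# Then look at InnoDB's status before running the next part,
# so any corruption can be tied to a single part.
mysql -u root -p -e "SHOW ENGINE INNODB STATUS\G"
```

Running the parts one at a time, with a status check in between, turns "the nightly run corrupts the DB" into "part X corrupts the DB", which is much easier to report or work around.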

Wednesday 26 September 2012 9:07:30 pm

Quote from David Ennis :

Hi.

InnoDB is pretty robust. Maybe something in eZ Publish triggers the issue (high I/O or large transactions). However, the solution may not be in changing eZ Publish, but in tuning InnoDB for larger datasets and (more specifically) large transactions.

Some InnoDB suggestions:

***read the caution at the end before mucking with any InnoDB settings***

  • You can get corruption if the InnoDB log file size is too small and you have large transactions. The fix is to increase innodb_log_file_size to at least 100-250 MB.
  • InnoDB is robust, but it relies on the OS to tell it when writes have really completed [fsync()]. So, under high load, hardware issues can cause corruption:
    • a RAID array with writeback cache enabled that reports a write as synced when it is not
    • some NAS solutions, because InnoDB may think a sync happened when it did not

So, the quick 'n dirty approach would be to at least consider the InnoDB setting above. After that, there are some other InnoDB settings that may help.

** Now the words of caution:

  • Startup, shutdown, and recovery of InnoDB correlate directly with innodb_log_file_size. Don't go crazy here or you'll be sitting around till 2017 if things have to be rebuilt.
  • Back up your log files, etc.
  • Test on a separate machine if possible - don't just blindly change InnoDB settings (like innodb_file_per_table), as some changes won't take effect as you expect and your MySQL instance may not come back online, or worse.

David, these recommendations will definitely be implemented. Thanks a lot - this is awesome. I think I am about to get some sleep...

Thursday 27 September 2012 9:09:21 am

Quote from Luis D Garcia :
Quote from David Ennis :

Hi.

InnoDB is pretty robust. Maybe something in eZ Publish triggers the issue (high I/O or large transactions). However, the solution may not be in changing eZ Publish, but in tuning InnoDB for larger datasets and (more specifically) large transactions.

Some InnoDB suggestions:

***read the caution at the end before mucking with any InnoDB settings***

  • You can get corruption if the InnoDB log file size is too small and you have large transactions. The fix is to increase innodb_log_file_size to at least 100-250 MB.
  • InnoDB is robust, but it relies on the OS to tell it when writes have really completed [fsync()]. So, under high load, hardware issues can cause corruption:
    • a RAID array with writeback cache enabled that reports a write as synced when it is not
    • some NAS solutions, because InnoDB may think a sync happened when it did not

So, the quick 'n dirty approach would be to at least consider the InnoDB setting above. After that, there are some other InnoDB settings that may help.

** Now the words of caution:

  • Startup, shutdown, and recovery of InnoDB correlate directly with innodb_log_file_size. Don't go crazy here or you'll be sitting around till 2017 if things have to be rebuilt.
  • Back up your log files, etc.
  • Test on a separate machine if possible - don't just blindly change InnoDB settings (like innodb_file_per_table), as some changes won't take effect as you expect and your MySQL instance may not come back online, or worse.

David, these recommendations will definitely be implemented. Thanks a lot - this is awesome. I think I am about to get some sleep...

No problem.

I hope it helps.

-David

Monday 22 October 2012 5:27:36 pm

OK, just to close this one out properly.

The thing was that ANYTHING, even looking at the server, might corrupt the DB... backups, cronjobs, anything...

So we dumped the DB and rebuilt it from scratch. In fact, we rebuilt all of InnoDB completely and applied what David recommended about innodb_log_file_size (made it bigger, though not above 100 MB, since MySQL recommends keeping it smaller). We then tested it by making double backups and running cronjobs all at the same time, several times. Nothing happened: no errors, and even under high traffic (1500+ concurrent users) there are no errors in the logs.

The solution was to make a dump and rebuild the DB from scratch. That's it.

Thank you all, guys!
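For anyone landing here later, the dump-and-rebuild procedure is roughly the following sketch. The paths and datadir are illustrative; take a filesystem-level copy first, and note that moving ibdata1 aside discards all InnoDB tables, so the dump must be complete and verified before that step:

```shell
# 1) Take a logical dump while the server is still readable.
#    --single-transaction gives a consistent InnoDB snapshot.
mysqldump --single-transaction --all-databases > full_dump.sql

# 2) Shut MySQL down cleanly, then move the (possibly damaged)
#    InnoDB system tablespace and redo logs out of the way.
#    /var/lib/mysql is a typical datadir; adjust for your system.
mysqladmin shutdown
mkdir -p /root/innodb-old
mv /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile* /root/innodb-old/

# 3) Restart MySQL (fresh InnoDB files are created) and reload.
service mysql start
mysql < full_dump.sql
```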
