Tuesday, February 25, 2014

Introduction to Job Queue daemon plugin

Dr. Adrian Partl is working in the E-Science group of the Leibniz Institute for Astrophysics Potsdam (AIP), where the key topics are cosmic magnetic fields and extragalactic astrophysics, the branch of astronomy concerned with objects outside our own Milky Way galaxy.

Why did you decide to create the Job Queue plugin, and what issues does it solve?

A: Basically, our MySQL databases hold astronomical simulation and observation data; the datasets are multiple terabytes in size and queries can take a long time. Astronomers can definitely wait for data acquisition, but they jump on the data as soon as it is available. Job Queue offers protection against too many parallel query executions and prevents our servers from being spammed. Multiple queues let us give priority to some users over others; today, queries are executed as soon as a slot is available. Timeouts can be defined per group, and queries running past that delay will be killed.

Would you like to tell us more about your personal background?

A: I studied astronomy and have a PhD in astrophysics. For my PhD I focused on high performance computing, parallelizing a radiation transport simulation code to enable running it on large computational clusters. Nowadays I'm more specialized in programming and managing big datasets. I stopped doing scientific tasks, but I enjoy helping to make those publications happen by providing all the IT infrastructure for doing the job.

How did you come to MySQL?

A: In the past we used SQL Server, but we rapidly reached the performance limits of a single box, and we found out that it can be very expensive to expand it with sharding.

We moved to MySQL, mostly with the MyISAM storage engine. We have also been using the Spider storage engine for 3 years to create the shards. We needed true parallel queries, so we created PaQu, a fork of Shard Query that integrates better with Spider. The map-reduce tasks in PaQu are all done by submitting multiple subsequent "direct background queries" to the Spider engine, and we bypass the Gearman layer used in Shard Query. With this in place it is possible to manage map-reduce tasks using our Job Queue plugin.
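
To illustrate the idea, here is a minimal sketch of the kind of rewrite a Shard Query / PaQu style map-reduce performs; the table and column names are hypothetical and this is not PaQu's actual generated SQL:

-- Map step: each shard computes partial aggregates; Spider pushes the
-- query down to every backend (hypothetical particle table).
CREATE TEMPORARY TABLE tmp_partial AS
SELECT snapshot, SUM(mass) AS sum_mass, COUNT(*) AS cnt
FROM particles
GROUP BY snapshot;

-- Reduce step: combine the partial results into the final answer.
SELECT snapshot, SUM(sum_mass) / SUM(cnt) AS avg_mass
FROM tmp_partial
GROUP BY snapshot;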

S: Spider is now integrated in MariaDB 10 and is improving fast regarding map-reduce jobs, using UDF functions with multiple channels on partitions and some simple aggregation query plans. Are you using advanced algorithms for big DBT3-style queries, like BKA joins and MRR? Did you explore new engines like TokuDB that could bring massive compression and disk IO savings to your dataset?
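
For reference, a sketch of how MRR and BKA joins can be switched on in a MariaDB session; the flag names come from MariaDB's optimizer_switch and the buffer values here are only illustrative:

SET optimizer_switch = 'mrr=on,mrr_sort_keys=on,join_cache_bka=on';
SET join_cache_level = 6;                  -- allow BKA join buffers
SET join_buffer_size = 32 * 1024 * 1024;   -- a larger buffer helps BKA on big joins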

A: I will definitely have a look at this. In the past we experimented with column stores, but they are not really adapted to what we do. Scientists extract all columns even though they don't use all of them. Better to get more than to re-extract :)

When did you start working on Job Queue and how much time did it take? Did you find enough information while developing a plugin? What was useful to you?

A: It took me one and a half years; I started by reading the MySQL source code. Some books helped me: Understanding MySQL Internals by Sasha Pachev at Percona, and MySQL 5.1 Plugin Development by Sergei Golubchik at SkySQL and Andrew Hutchings at HP. Reading the source code of the HandlerSocket plugin from Yoshinori Matsunobu definitely put me on a faster track.

S: Yes, we all miss Yoshinori, but he is now more social than ever :). Did you also look for help on our public freenode IRC MariaDB channel?

A: Not at all, but I will visit it now that I know about it.

How is the feedback from the community so far?

A: It has not picked up yet, but I also ported the pgSphere API from PostgreSQL. That project is called mysql_sphere; it is still lacking indexes but is fully functional, and so far it has received very good feedback.

Any wishes for the core?

A: A GiST index API like in PostgreSQL would be very nice to have. I have recently started a proxying storage engine to support multi-dimensional R-Trees, but I would really like to add indexing on top of the existing storage engines.

S: The CONNECT engine made by Olivier Bertrand shares the same requirements; to create an indexing proxy you still need to create a full engine for it. We support R-trees in InnoDB and MyISAM, but this is a valid point: we do not have a functional index API like GiST. This has already been discussed internally but has never been implemented.
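
For comparison, the existing R-tree support looks roughly like this with a MyISAM spatial index; the table is hypothetical:

CREATE TABLE sky_regions (
  id INT NOT NULL,
  region GEOMETRY NOT NULL,
  SPATIAL INDEX (region)        -- backed by an R-tree in MyISAM
) ENGINE=MyISAM;

SELECT id
FROM sky_regions
WHERE MBRContains(region, POINT(12.5, -30.2));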

The results of a job execution are materialized in tables; can you force a storage engine for a job result?

A: This is not possible at the moment, but it would be easy to implement.

What OSes and forks are known to be working with Job Queue?

A: It is not tested very deeply, because we mostly use it internally on Linux with MySQL 5.5, and we have tested it on MariaDB recently. I don't see any reason why it would not work on other OSes. Feedback is of course very welcome!

Do you plan to add features in upcoming releases?

A: We don't really need additional features nowadays, but we are open to any user requests.

S: Running some queries on a schedule?

A: It can be done. I could allocate time if it makes sense for users.

Job Queue is part of a bigger project, Daiquiri, which uses Gearman. Can you elaborate?

A: Yes, Daiquiri is our PHP web framework for the publication of datasets. It is managed by Dr. Jochen Klar and controls dataset permissions and roles independently of the MySQL grants. Job Queue is an optional component on top of it, for submitting jobs to multiple predefined datasets. We allow our users to enter free-form queries. Daiquiri is our front office for PaQu and the Job Queue plugin. We use Gearman in Daiquiri to dump user requests to CSV or to specialized data formats.

S: We have recently implemented roles in MariaDB 10; you may enjoy this as well, though for sure it may not fit all specific custom requirements.
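
A minimal sketch of what roles look like in MariaDB 10, using hypothetical role, database, and user names:

CREATE ROLE astronomer;
GRANT SELECT ON simulations.* TO astronomer;   -- grant on the published dataset
GRANT astronomer TO 'adrian'@'%';              -- assign the role to a user
SET ROLE astronomer;                           -- activate it in the user's session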

Where can we learn more about Job Queue?  

mail: apartl@aip.de

S: Transporting MySQL and MariaDB to space, the last frontier: there are few days like the one when I discovered your work, making me proud to work for an Open Source company. Many thanks, Adrian, for your contributions!

S: If you find this plugin useful and would like to use it, tell our engineering team by voting for this public Jira task. If you share the need for a GiST-like indexing API, please vote for that public Jira task as well.

Wednesday, December 11, 2013

MariaDB world record price per row: $0.0000005 on a single DELL R710

This is not an industry benchmark; it's a real client story.

200 billion records in a month, and it should be transactional but not durable.

For a regular workload we use LOAD DATA INFILE into partitioned InnoDB, but here we have estimated 15TB of RAID storage. That is a lot of disks, and it can no longer fit inside a single server's internal storage.
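
For context, a minimal sketch of that regular workflow, with a hypothetical table layout and file path:

CREATE TABLE events (
  id      BIGINT NOT NULL,
  ts      DATETIME NOT NULL,
  payload VARCHAR(255),
  PRIMARY KEY (id, ts)
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(ts)) (
  PARTITION p20131201 VALUES LESS THAN (TO_DAYS('2013-12-02')),
  PARTITION p20131202 VALUES LESS THAN (TO_DAYS('2013-12-03')),
  PARTITION pmax      VALUES LESS THAN MAXVALUE
);

LOAD DATA INFILE '/data/batch_0001.csv'
INTO TABLE events
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';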

MariaDB 5.5 comes with the TokuDB storage engine for compression, but is it possible within the time frame imposed by the workload?

We started benchmarking with 380GB of raw input data files, 6 billion rows.

First let's check the compression with the dataset.
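
The TokuDB side of the test relaxes durability at the server level and relies on the engine's default compression; a sketch, with the variable values as assumptions:

-- Don't fsync the TokuDB log on every commit: transactional, but not durable.
SET GLOBAL tokudb_commit_sync = 0;
SET GLOBAL tokudb_fsync_log_period = 1000;   -- fsync the log roughly once per second

-- Hypothetical table cloned from the InnoDB layout, default zlib compression.
CREATE TABLE events_toku LIKE events;
ALTER TABLE events_toku ENGINE=TokuDB ROW_FORMAT=TOKUDB_ZLIB;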


Great job, my TokuDB: a 1/5 compression ratio without tuning a single parameter other than durability! Well, I love you more every day, my TokuDB.

My ex InnoDB: 30% of compression operations missed at 8K page size, a very bad compression ratio and slow insertion time. Don't worry, InnoDB, I still love you in memory :)


OK, every love affair has a dark side :)



So now you can see that it works for 200 billion rows, because that gives 277 hours of processing time at 200K inserts/s (200,000,000,000 rows / 200,000 rows per second = 1,000,000 seconds ≈ 277 hours).

In a month, if we impose 12 hours a day, 6 days a week of processing at full capacity, that is 288 hours (12 × 6 × 4).

That is a very slim margin: getting compression over 200 billion records without sharding will be hard.

Fortunately, MariaDB 10 has native network partitioning using the Spider contribution; don't miss it.
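
A minimal sketch of what that can look like with Spider on the head node, using hypothetical backend hosts and credentials:

CREATE TABLE events_sharded (
  id      BIGINT NOT NULL,
  ts      DATETIME NOT NULL,
  payload VARCHAR(255),
  PRIMARY KEY (id, ts)
) ENGINE=SPIDER
COMMENT='wrapper "mysql", table "events"'
PARTITION BY KEY (id) (
  PARTITION p0 COMMENT='host "backend1", port "3306", user "spider", password "xxx"',
  PARTITION p1 COMMENT='host "backend2", port "3306", user "spider", password "xxx"'
);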


Friday, July 5, 2013

MariaDB Storage Engine for CCM forum

CCM Benchmark is one of the leading forum providers on the web. ROI is a major concern for them, and historically MyISAM was used on the forum replication cluster. The reason is that MyISAM gave better ROI/performance on data that is hardly eligible for caching.

This post is for MySQL users at scale. If the number of servers or datacenter cost is not an issue for you, better to get some more memory or flash storage (you will find the Lucifer server below demonstrating that such an investment is not a waste of money), or just migrate to Mongo.

Quoting Damien Mangin, CTO at CCM: "I like my data to be small. Who wants to get to a post where the question is not popular and has no answer? Despite cleaning, we still get more data than what commodity hardware memory can offer, and storing all posts in memory would be a major waste of money."

Like many other big web players at another scale, Damien needs to scale on disk, not because it's good, but because you can handle more with less hardware. Doing this, you need to keep cache misses at a level that you find acceptable and that gives constant response times for your workload.

What data size do we get when retaining the most popular forum posts?



Engine         Data
MyISAM         49G
TokuDB Fast    22G
InnoDB         80G
InnoDB 8K      50G
TokuDB Small   ?
InnoDB 4K      ?


What hardware do we have?


PUMA : MariaDB 5.5 InnoDB 32G RAM

|__ LUCIFER : MariaDB 5.5 InnoDB compressed 8K 64G RAM

|__ GERTRUDE : MariaDB 5.5 MyISAM 32G RAM

|__ MYSQL1 : MariaDB 5.5 MyISAM 32G RAM

|__ MYSQL3 : MariaDB 5.5 TokuDB Fast 32G RAM


What are the top 10 queries, and their response times on each server?

Q1


SELECT categorie, best_answer_id FROM ccmforum_index WHERE id=169328


No surprise here: that table is small, and we notice that TokuDB and InnoDB compression do not affect the response time of the query.


Q2


SELECT id,message FROM ccmforum WHERE id IN(?,?,?,?,?)

With anywhere from 1 to 5000 values in the IN clause.
This table is the big baby that generates random IOPS.



Interestingly, you get the reason here why MyISAM is better than InnoDB on equal hardware for a disk-bound workload.

3 times better is something that matters for the second most frequent query.
We get almost equal performance for MyISAM (mysql1) and TokuDB (mysql3), knowing that TokuDB fits all its data in RAM, MyISAM 75%, and uncompressed InnoDB (puma) 50%.


 Q3

SELECT parentx FROM uforums WHERE module="download" AND info_id=223




Q4


SELECT i.categorie, c.resume, c.title, COUNT(i.categorie) AS nb
FROM ccmforum_index i
INNER JOIN ccmforum_cat c ON i.categorie = c.id
WHERE i.parentx IN (32932,213290,2937,15002,13612,10016,154379,116397,79497,31886,4235,5038,5222,84819,81100,36025,8274,162824,10620,21731,12130,123360,232454)
  AND c.visibilite = 0 AND c.acces = 0
GROUP BY i.categorie
ORDER BY nb DESC


Q5



SELECT m.id, s.contribs, s.contribs_technique, p.devise,
       UNIX_TIMESTAMP(m.ts_create) AS date, p.photo, p.photo_etag, m.nick,
       UNIX_TIMESTAMP(s.ts_last_post) AS ts_last_post, p.siteperso AS website,
       (m.rang+1) AS level, m.contributeur AS contributor, m.blocked
FROM commentcamarche.ccmmembres m
INNER JOIN ccmforum_stats s ON m.id = s.id
LEFT JOIN commentcamarche.ccmprofils p ON p.id = s.id
WHERE m.id IN (1191274)




Q6

SELECT i.id, i.titre, i.auteur, UNIX_TIMESTAMP(i.date) AS date, i.membre,
       UNIX_TIMESTAMP(i.datex) AS datex, i.etat, i.categorie, i.parentx, i.member_id,
       i.reponses, i.dernier, i.dernier_membre, i.premier, i.premier_membre,
       UNIX_TIMESTAMP(i.datex) AS unix_datex, UNIX_TIMESTAMP(i.date) AS unix_date,
       0 AS view, i.appreciation
FROM ccmforum_index i
WHERE i.categorie IN (2,105,10,111,108,106,110,109,107) AND i.etat != 0
ORDER BY i.datex DESC
LIMIT 2350,50


Q7

select sum(count) as cpt from ccmforum_count




Q8

SELECT id,nick FROM commentcamarche.ccmmembres WHERE nick="hyxo"



Q9

SELECT m.nick, m.mail, m.valid, s2.site_id AS id_site_create,
       UNIX_TIMESTAMP(m.ts_create) AS ts_create,
       UNIX_TIMESTAMP(s.ts_last_post) AS ts_last_post,
       UNIX_TIMESTAMP(p.ts_last_edit) AS ts_last_edit,
       m.rang+1 AS level, m.contributeur AS contributor, m.following, m.followers,
       m.signature, p.configuration AS config, p.domaines AS interest_areas,
       p.devise AS quote, p.bio, m.sexe AS gender, m.ville AS city, m.pays AS country,
       CONCAT(p.anniversaire_annee,'-',p.anniversaire_mois,'-',p.anniversaire_jour) AS birthdate,
       p.siteperso AS website, m.newsletter AS optin_ccm, m.optin_part, m.`blocked`,
       m.messagerie AS accept_pm, m.notifications, p.photo AS picture,
       p.photo_etag AS picture_etag, m.domaine AS registration_domain,
       p.date AS show_date, p.ville AS show_city, p.pays AS show_country,
       p.anniversaire AS show_birthdate, p.sexe AS show_gender,
       p.email AS show_mail, LENGTH(p.siteperso) AS show_website,
       d.job, d.company, d.biography, d.website AS websiteMD, d.twitter, d.facebook,
       d.linkedin, d.googleplus, d.firstname, d.lastname
FROM commentcamarche.ccmmembres m
LEFT JOIN commentcamarche.ccmprofils p ON p.id = m.id
LEFT JOIN commentcamarche.ccmmembres_data d ON d.id = m.id
LEFT JOIN ccmforum_stats s ON s.id = m.id
INNER JOIN globals.sites s2 ON s2.domain = m.domaine
WHERE m.id = 207360

Takeaway

TokuDB showed response times identical to MyISAM while being at least 2 times smaller on disk. We did not check InnoDB compression with 32G of RAM, which would have been a fairer test, but that was not the point, as CCM has a server with enough memory to cover InnoDB's fatness.
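
For reference, a sketch of how on-disk sizes like these can be compared from information_schema; the schema name is hypothetical and TokuDB's reported sizes are approximate:

SELECT table_name, engine,
       ROUND((data_length + index_length) / 1024 / 1024 / 1024, 1) AS size_gb
FROM information_schema.tables
WHERE table_schema = 'ccmforum'
ORDER BY size_gb DESC;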

We notice that TokuDB, like InnoDB, does not make the count(*) query faster when the data stays in the cache, but TokuDB compression does not hurt performance on any of the major queries.