
How do you optimize a specific SQL statement?



Use EXPLAIN to analyze a SQL statement's execution plan

mysql> explain select * from t_online_group_records where UNIX_TIMESTAMP(gre_updatetime) > 123456789;

+----+-------------+------------------------+------+---------------+------+---------+------+------+-------------+
| id | select_type | table                  | type | possible_keys | key  | key_len | ref  | rows | Extra       |
+----+-------------+------------------------+------+---------------+------+---------+------+------+-------------+
|  1 | SIMPLE      | t_online_group_records | ALL  | NULL          | NULL | NULL    | NULL |   47 | Using where |
+----+-------------+------------------------+------+---------------+------+---------+------+------+-------------+
1 row in set (0.00 sec)

As the example above shows, pay particular attention to type, rows, and Extra:

type: the access type, i.e. whether and how an index is used. Values from best to worst: … > range (an index is used) > index > ALL (full table scan). An ordinary query should generally reach at least the range level.

rows: the number of rows MySQL estimates it must examine to execute the SQL.

Extra: additional execution information, e.g. "Using index" means the query is satisfied from the index alone and the table itself does not need to be read.
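
A rough sketch of how these columns change once a usable index exists (the index name below is made up; the table is the one from the example above): after adding an index on gre_updatetime and comparing against the bare column, the plan should drop from ALL to range, and because only the indexed column is selected, Extra should include "Using index":

mysql> -- add an ordinary secondary index on the column used in the WHERE clause
mysql> alter table t_online_group_records add index idx_gre_updatetime (gre_updatetime);
mysql> -- filter on the bare column (no function wrapped around it) so the index can be used
mysql> explain select gre_updatetime from t_online_group_records
    ->     where gre_updatetime > FROM_UNIXTIME(123456789);
-- expected: type = range, key = idx_gre_updatetime, Extra includes "Using index"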

Use Profiles to analyze a SQL statement's execution time and resource consumption

mysql> set profiling=1; (enables profiling; it is off by default)

mysql> select count(1) from t_online_group_records where UNIX_TIMESTAMP(gre_updatetime) > 123456789; (run the SQL statement you want to analyze)

mysql> show profiles;

+----------+------------+-----------------------------------------------------------------------------------------------+
| Query_ID | Duration   | Query                                                                                         |
+----------+------------+-----------------------------------------------------------------------------------------------+
|        1 | 0.00043250 | select count(1) from t_online_group_records where UNIX_TIMESTAMP(gre_updatetime) > 123456789 |
+----------+------------+-----------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> show profile cpu, block io for query 1; (shows the time spent and resources consumed by the SQL at each stage)

+----------------------+----------+----------+------------+--------------+---------------+
| Status               | Duration | CPU_user | CPU_system | Block_ops_in | Block_ops_out |
+----------------------+----------+----------+------------+--------------+---------------+
...
| optimizing           | 0.000016 | 0.000000 |   0.000000 |            0 |             0 |
| statistics           | 0.000020 | 0.000000 |   0.000000 |            0 |             0 |
| preparing            | 0.000017 | 0.000000 |   0.000000 |            0 |             0 |
| executing            | 0.000011 | 0.000000 |   0.000000 |            0 |             0 |
| Sending data         | 0.000076 | 0.000000 |   0.000000 |            0 |             0 |
...
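
When the analysis is done, profiling can be switched back off. The same per-stage data is also exposed through the INFORMATION_SCHEMA.PROFILING table, which can be handy if you want to filter or aggregate it like an ordinary table (a small sketch, assuming the same session as above):

mysql> -- the same data as SHOW PROFILE, queryable like a normal table
mysql> select STATE, DURATION from INFORMATION_SCHEMA.PROFILING where QUERY_ID = 1 order by SEQ;
mysql> -- turn profiling back off once finished
mysql> set profiling = 0;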

SQL optimization tips (covering only problems commonly seen in real business code)

The most important point: indexes. Avoid full table scans.

Slow-query analysis on the projects I have worked with shows that the top 10 offenders are almost always missing indexes or indexes used incorrectly, e.g. a function applied to an indexed column, which disables the index (as in where UNIX_TIMESTAMP(gre_updatetime) > 123456789). The timing comparison below shows the cost of wrapping the column in an expression, and an index-friendly rewrite is sketched after the table:

+----------+------------+-------------------------------------------+
| Query_ID | Duration   | Query                                     |
+----------+------------+-------------------------------------------+
|        1 | 0.00024700 | select * from mytable where id = 100      |
|        2 | 0.27912900 | select * from mytable where id + 1 = 101  |
+----------+------------+-------------------------------------------+
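
A small sketch of the fix in both cases (assuming indexes exist on mytable.id and t_online_group_records.gre_updatetime): keep the indexed column bare and move the arithmetic or function call over to the constant side, so the optimizer can still use the index:

-- index disabled: an expression wraps the indexed column, forcing a full table scan
select * from mytable where id + 1 = 101;
-- index usable: the column stays bare and the constant absorbs the arithmetic
select * from mytable where id = 100;

-- index disabled: function applied to the indexed column
select * from t_online_group_records where UNIX_TIMESTAMP(gre_updatetime) > 123456789;
-- index usable: equivalent predicate with the conversion moved to the constant side
select * from t_online_group_records where gre_updatetime > FROM_UNIXTIME(123456789);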

Another common case: when pulling a whole table, many developers like to fetch in batches with select xx from xx limit 5000, 1000. That SQL actually scans the table from the start on every batch. It is better to add an auto-increment id column with an index and rewrite the SQL as select xx from xx where id > 5000 and id <= 6000. The timing comparison below shows the difference, and a sketch of the approach follows the table:

+----------+------------+----------------------------------------------------------+
| Query_ID | Duration   | Query                                                    |
+----------+------------+----------------------------------------------------------+
|        1 | 0.00415400 | select * from mytable where id >= 90000 and id < 91000  |
|        2 | 0.10078100 | select * from mytable limit 90000, 1000                 |
+----------+------------+----------------------------------------------------------+
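
A small sketch of the id-range approach and how to verify it (assuming mytable has an auto-increment primary key named id, as suggested above):

mysql> -- full scan: MySQL still reads and throws away the first 90000 rows
mysql> select * from mytable limit 90000, 1000;
mysql> -- range scan on the primary key: jumps directly to the wanted rows
mysql> select * from mytable where id >= 90000 and id < 91000;
mysql> -- verify: type should now be "range" and key should be PRIMARY
mysql> explain select * from mytable where id >= 90000 and id < 91000;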

Used properly, indexes should solve the majority of SQL problems. That said, more indexes are not always better: too many indexes hurt write performance.

Only select the columns you actually need; avoid select *. The timing comparison below shows the difference, and a small sketch follows the table:

+----------+------------+---------------------------------------------------+
| Query_ID | Duration   | Query                                             |
+----------+------------+---------------------------------------------------+
|        1 | 0.02948800 | select count(1) from (select id from mytable) a   |
|        2 | 1.34369100 | select count(1) from (select * from mytable) a    |
+----------+------------+---------------------------------------------------+
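
A small sketch of the same point (mytable and its uid column are hypothetical): listing only the needed columns cuts the data MySQL has to read and send, and in InnoDB it can also let a secondary index cover the query entirely:

mysql> -- only the columns the caller needs; a secondary index on uid also stores the
mysql> -- primary key id, so this query can be answered from the index alone
mysql> select id, uid from mytable where uid = 42;
mysql> -- every column of every matching row is read and sent, whether used or not
mysql> select * from mytable where uid = 42;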

Filter as early as possible, so that later operations such as Join or Union work on as small a data set as possible.

Move whatever can be computed in the application layer up to the application layer, e.g. some sorting and time-function calculations.

…….

PS: There are already plenty of articles on SQL optimization, so I will not try to be exhaustive here. I will just stress the one lesson from my own experience: indexes! It almost always comes down to indexes!

4. What can you do when the SQL itself can hardly be optimized any further but the request volume keeps growing?

Here are a few approaches I can think of:

Split the databases and tables (sharding); a single-instance sketch of table splitting follows this list

Use a cluster (master-slave replication) and separate reads from writes

Add a cache layer in front of the business logic

Use a connection pool
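
As a purely single-instance illustration of the table-splitting idea (real sharding across multiple databases is normally handled in the application or by middleware rather than in SQL; the table below is hypothetical), MySQL's built-in partitioning spreads one logical table over several physical pieces:

mysql> -- hash-partition a large log table by its auto-increment primary key
mysql> create table t_access_log (
    ->     id bigint not null auto_increment,
    ->     uid bigint not null,
    ->     created_at datetime not null,
    ->     primary key (id)
    -> )
    -> partition by hash(id)
    -> partitions 4;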
