pg11
A plain select count(*) does run in parallel:
sdktest=> explain(analyze,verbose,costs) select count(*) from t_l_game_login_log;
                                                            QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
 Finalize Aggregate  (cost=949099.50..949099.51 rows=1 width=8) (actual time=1926.216..1926.217 rows=1 loops=1)
   Output: count(*)
   ->  Gather  (cost=949099.46..949099.47 rows=12 width=8) (actual time=1923.655..1967.013 rows=13 loops=1)
         Output: (PARTIAL count(*))
         Workers Planned: 12
         Workers Launched: 12
         ->  Partial Aggregate  (cost=948099.46..948099.47 rows=1 width=8) (actual time=1856.612..1856.613 rows=1 loops=13)
               Output: PARTIAL count(*)
               Worker 0: actual time=1849.929..1849.929 rows=1 loops=1
               Worker 1: actual time=1855.286..1855.287 rows=1 loops=1
               Worker 2: actual time=1848.387..1848.387 rows=1 loops=1
               Worker 3: actual time=1849.642..1849.642 rows=1 loops=1
               Worker 4: actual time=1849.865..1849.865 rows=1 loops=1
               Worker 5: actual time=1849.815..1849.816 rows=1 loops=1
               Worker 6: actual time=1859.493..1859.493 rows=1 loops=1
               Worker 7: actual time=1855.420..1855.420 rows=1 loops=1
               Worker 8: actual time=1849.899..1849.900 rows=1 loops=1
               Worker 9: actual time=1849.641..1849.641 rows=1 loops=1
               Worker 10: actual time=1849.872..1849.872 rows=1 loops=1
               Worker 11: actual time=1849.886..1849.887 rows=1 loops=1
But anything other than a plain count stops parallelizing...
sdktest=> explain(analyze,verbose,costs) select time_bucket('1 days', count_time) as day1, COUNT(id) from t_l_game_login_log where count_time > now() - interval '60 day' group by day1, app_id;
                                                            QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=807049.04..807172.99 rows=9916 width=20) (actual time=12512.462..12513.110 rows=2755 loops=1)
   Output: (time_bucket('1 day'::interval, t_l_game_login_log.count_time)), count(t_l_game_login_log.id), t_l_game_login_log.app_id
   Group Key: time_bucket('1 day'::interval, t_l_game_login_log.count_time), t_l_game_login_log.app_id
   ->  Custom Scan (ConstraintAwareAppend)  (cost=0.00..712676.29 rows=12583034 width=16) (actual time=0.065..10334.102 rows=12650070 loops=1)
         Output: time_bucket('1 day'::interval, t_l_game_login_log.count_time), t_l_game_login_log.app_id, t_l_game_login_log.id
         Hypertable: t_l_game_login_log
         Chunks left after exclusion: 36
         ->  Append  (cost=0.00..681218.70 rows=12583034 width=16) (actual time=0.063..8630.928 rows=12650070 loops=1)
               ->  Index Scan using _hyper_32_1429_chunk_t_l_game_login_log_count_time_idx on _timescaledb_internal._hyper_32_1429_chunk  (cost=0.43..14150.17 rows=259580 width=16) (actual time=0.062..154.435 rows=277763 loops=1)
                     Output: _hyper_32_1429_chunk.count_time, _hyper_32_1429_chunk.id, _hyper_32_1429_chunk.app_id
                     Index Cond: (_hyper_32_1429_chunk.count_time > (now() - '60 days'::interval))
               ->  Index Scan using _hyper_32_1430_chunk_t_l_game_login_log_count_time_idx on _timescaledb_internal._hyper_32_1430_chunk  (cost=0.43..5594.26 rows=106791 width=16) (actual time=0.058..56.189 rows=106399 loops=1)
                     Output: _hyper_32_1430_chunk.count_time, _hyper_32_1430_chunk.id, _hyper_32_1430_chunk.app_id
                     Index Cond: (_hyper_32_1430_chunk.count_time > (now() - '60 days'::interval))
               ->  Index Scan using _hyper_32_1431_chunk_t_l_game_login_log_count_time_idx on _timescaledb_internal._hyper_32_1431_chunk  (cost=0.43..11419.14 rows=213147 width=16) (actual time=0.067..141.589 rows=257787 loops=1)
                     Output: _hyper_32_1431_chunk.count_time, _hyper_32_1431_chunk.id, _hyper_32_1431_chunk.app_id
                     Index Cond: (_hyper_32_1431_chunk.count_time > (now() - '60 days'::interval))
               ->  Index Scan using _hyper_32_1432_chunk_t_l_game_login_log_count_time_idx on _timescaledb_internal._hyper_32_1432_chunk  (cost=0.42..2065.82 rows=39474 width=16) (actual time=0.064..26.715 rows=44729 loops=1)
                     Output: _hyper_32_1432_chunk.count_time, _hyper_32_1432_chunk.id, _hyper_32_1432_chunk.app_id
                     Index Cond: (_hyper_32_1432_chunk.count_time > (now() - '60 days'::interval))
               ->  Seq Scan on _timescaledb_internal._hyper_32_1445_chunk  (cost=0.00..27784.83 rows=572733 width=16) (actual time=0.015..331.596 rows=572733 loops=1)
                     Output: _hyper_32_1445_chunk.count_time, _hyper_32_1445_chunk.id, _hyper_32_1445_chunk.app_id
                     Filter: (_hyper_32_1445_chunk.count_time > (now() - '60 days'::interval))
               ->  Seq Scan on _timescaledb_internal._hyper_32_1446_chunk  (cost=0.00..31036.78 rows=627016 width=16) (actual time=0.016..364.134 rows=626923 loops=1)
                     Output: _hyper_32_1446_chunk.count_time, _hyper_32_1446_chunk.id, _hyper_32_1446_chunk.app_id
                     Filter: (_hyper_32_1446_chunk.count_time > (now() - '60 days'::interval))
               ->  Seq Scan on _timescaledb_internal._hyper_32_1447_chunk  (cost=0.00..12388.78 rows=262216 width=16) (actual time=0.016..149.345 rows=262216 loops=1)
                     Output: _hyper_32_1447_chunk.count_time, _hyper_32_1447_chunk.id, _hyper_32_1447_chunk.app_id
Forcing a parallel query doesn't help either.
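(For reference, "forcing" a parallel plan on PostgreSQL 11 usually means the planner knobs below; a sketch with illustrative values, since the exact settings tried aren't shown above:)

SET max_parallel_workers_per_gather = 12;  -- allow up to 12 workers per Gather node
SET parallel_setup_cost = 0;               -- zero out the costs that bias the planner away from parallel plans
SET parallel_tuple_cost = 0;
SET min_parallel_table_scan_size = 0;      -- consider parallel scans even for small relations
SET force_parallel_mode = on;              -- testing aid only: wraps the plan in a Gather,
                                           -- but does not make a non-parallel-safe scan parallel-aware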
Is this my problem, or does TimescaleDB just not support parallelism?
Google turns up nothing either, orz
1
lolizeppelin OP
Is it because the Custom Scan (ConstraintAwareAppend) (cost=0.00..712676.29 rows=12583034 width=16) (actual time=0.065..10334.102 rows=12650070 loops=1) step takes so long?
2
lolizeppelin OP
Feels like each chunk scan is short enough that parallelism isn't considered worthwhile?
With the time all going to excluding the unneeded chunks at the very start?
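(One way to test that guess is to take TimescaleDB's custom node out of the plan and compare. A sketch, assuming the timescaledb.enable_constraint_aware_append GUC from the TimescaleDB 1.x series is available in the version used here:)

SET timescaledb.enable_constraint_aware_append = off;  -- fall back to a plain Append
EXPLAIN (ANALYZE, VERBOSE, COSTS)
SELECT time_bucket('1 days', count_time) AS day1, count(id)
  FROM t_l_game_login_log
 WHERE count_time > now() - interval '60 day'
 GROUP BY day1, app_id;

(If the plain Append then gets a parallel plan, that points at the ConstraintAwareAppend node as the blocker.)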
3
gtlions 2019-07-05 08:43:51 +08:00 via iPhone
If I remember correctly, right — it's not supported. We verified this back when doing technology selection. The write-ups and some of our own tests looked really good, but then while validating bulk loading we found the performance was surprisingly bad, and it turned out it would actually lock the table. And then... there was no "then"...
4
lolizeppelin OP
@gtlions No way... see "3.2. The second most optimal query" in https://medium.com/@aiven_io/timescaledb-101-the-why-what-and-how-9c0eb08a7c0b ... It seems it used to be supported. Did a newer version break it by adding transactions? Or is it only supported in the commercial edition now? 0 0
5
lolizeppelin OP
I contacted the official support... they also say it's supported...
Damn, what did I do wrong?
6
lolizeppelin OP
@gtlions
...I think I've found the cause. If the timestamp in the WHERE condition carries no seconds, e.g. '2019-01-01 02:00:00', the query parallelizes; if it carries seconds, e.g. '2019-05-07 19:59:37', it won't.........? What the hell is this?