Description
Command run:
/usr/local/esm/bin/esm -s http://10.20.4.148:9204 -x jw_account_twitter_www -d http://10.20.5.92:9204 -y jw_account_twitter_www -w 10 -b 20 -c 5000 --sliced_scroll_size=5 --buffer_count=500000 -t 480m --refresh
Log output:
[05-21 14:02:07] [INF] [main.go:474,main] start data migration..
Scroll 245000 / 489705040 [>--------------------------------------------------------------------------------------------] 0.05% 3s
Bulk 0 / 489705040 [--------------------------------------------------------------------------------------------] 0.00%
[05-21 14:02:10] [ERR] [scroll.go:112,ProcessScrollResult] {"bytes_limit":19971597926,"bytes_wanted":23697591646,"durability":"PERMANENT","reason":"[parent] Data too large, data for [indices:data/read/sea
Scroll 1965000 / 489705040 [>--------------------------------------------------------------------------------------------] 0.40% 41s
Bulk 1262571 / 489705040 [>--------------------------------------------------------------------------------------------] 0.26% 4h19m20s
[05-21 14:02:48] [ERR] [scroll.go:112,ProcessScrollResult] {"bytes_limit":19971597926,"bytes_wanted":22085485606,"durability":"PERMANENT","reason":"[parent] Data too large, data for [indices:data/read/sea
Scroll 1975000 / 489705040 [>--------------------------------------------------------------------------------------------] 0.40% 41s
Bulk 1262571 / 489705040 [>---------------------------------------------------------------------------------------------] 0.26% 4h19m20s
[05-21 14:02:48] [ERR] [scroll.go:112,ProcessScrollResult] {"bytes_limit":19971597926,"bytes_wanted":24417518630,"durability":"PERMANENT","reason":"[parent] Data too large, data for [indices:data/read/sea
Scroll 3505000 / 489705040 [=>--------------------------------------------------------------------------------------------] 0.72% 2m16s
Bulk 3504849 / 489705040 [=>---------------------------------------------------------------------------------------------] 0.72% 2m16s
[05-21 14:04:24] [INF] [main.go:505,main] data migration finished.
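The errors above look like the Elasticsearch parent circuit breaker on the source cluster rejecting the scroll/search requests: bytes_wanted exceeds bytes_limit (19971597926 bytes, roughly 18.6 GB, which in 7.x is typically 95% of the node's JVM heap). Assuming that is what is happening, the breaker state on the source node (address taken from the -s parameter above) can be checked with the standard node stats API; this is only a diagnostic sketch, not output from esm:
curl -s 'http://10.20.4.148:9204/_nodes/stats/breaker?pretty'
In that output, the parent breaker's limit_size_in_bytes should match the bytes_limit value from the errors, and estimated_size shows how much memory is currently charged against it.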
Notes:
The index being migrated was originally migrated from version 6 to version 7, and its mapping type is content. Now it needs to be migrated back from 7.17.4 to 6.8.6, but the migration does not succeed. There are also other indices that stop migrating at the same point every time, while the log reports that the migration has finished.
For example:
Bulk 188633543 / 524941135 [===================>--------------------------------------------] 35.93% 2h41m28s
[05-21 13:16:21] [ERR] [scroll.go:110,ProcessScrollResult] {"bytes_limit":19971597926,"bytes_wanted":29064105238,"durability":"PERMANENT","reason":"[parent] Data too large, data for [indices:data/read/sea
Scroll 193625000 / 524941135 [===================>--------------------------------------------] 36.89% 1h33m27s
Bulk 193625000 / 524941135 [====================>--------------------------------------------] 36.89% 1h33m27s
Each run stops after migrating only 193625000 documents.
What could be causing this?
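In case it helps narrow things down: if the parent circuit breaker on the source node is what rejects the scroll requests, would rerunning the same command with smaller batch settings (the same flags as above, only with lower values; untested) be expected to avoid tripping it? For example:
/usr/local/esm/bin/esm -s http://10.20.4.148:9204 -x jw_account_twitter_www -d http://10.20.5.92:9204 -y jw_account_twitter_www -w 5 -b 5 -c 1000 --sliced_scroll_size=5 --buffer_count=100000 -t 480m --refresh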