mongodb - My mongorestore runs to infinity


I did a mongorestore of a gzipped mongodump:

mongorestore -v --drop --gzip --db bigdata /volumes/lacie2tb/backup/mongo20170909/bigdata/ 
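
For reference, the dump itself was presumably created with something along these lines (I'm reconstructing the paths and options here, so treat them as an assumption rather than the exact command that was run):

mongodump --gzip --db bigdata --out /volumes/lacie2tb/backup/mongo20170909/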

But it kept going. I left it running, because I figured that if I 'just' killed it now, the (important) data would be corrupted. Check out the percentages:

2017-09-10T14:45:58.385+0200    [########################]  bigdata.logs.sets.log  851.8 GB/85.2 GB  (999.4%)
2017-09-10T14:46:01.382+0200    [########################]  bigdata.logs.sets.log  852.1 GB/85.2 GB  (999.7%)
2017-09-10T14:46:04.381+0200    [########################]  bigdata.logs.sets.log  852.4 GB/85.2 GB  (1000.0%)

And it keeps going!

Note that the other collections have finished; only this one goes beyond 100%. I don't understand it.

This is MongoDB 3.2.7 on Mac OS X.

The reported amount of data imported can't be right, because there isn't that much disk space:

$ df -h
Filesystem      Size   Used  Avail Capacity    iused     ifree %iused  Mounted on
/dev/disk3     477Gi  262Gi  214Gi    56%   68749708  56193210   55%   /

The amount of disk space used could be right, because the gzipped backup is 200 GB. I don't know whether that results in the same amount of data in the WiredTiger database with snappy compression.
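
One way to sanity-check this (a rough sketch, assuming I can open a shell against the running instance) is to compare the collection's logical BSON size with what WiredTiger actually stores on disk after snappy compression:

$ mongo bigdata --quiet --eval 'printjson(db.getCollection("logs.sets.log").stats(1024*1024*1024))'
# "size" is the uncompressed BSON data size (scale 1024^3 = GB),
# "storageSize" is the compressed size WiredTiger uses on disk.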

However, the log keeps showing inserts:

2017-09-10T16:20:18.986+0200 I COMMAND  [conn9] command bigdata.logs.sets.log command: insert { insert: "logs.sets.log", documents: 20, writeConcern: { getLastError: 1, w: 1 }, ordered: false } ninserted:20 keyUpdates:0 writeConflicts:0 numYields:0 reslen:40 locks:{ Global: { acquireCount: { r: 19, w: 19 } }, Database: { acquireCount: { w: 19 } }, Collection: { acquireCount: { w: 19 } } } protocol:op_query 245ms
2017-09-10T16:20:19.930+0200 I COMMAND  [conn9] command bigdata.logs.sets.log command: insert { insert: "logs.sets.log", documents: 23, writeConcern: { getLastError: 1, w: 1 }, ordered: false } ninserted:23 keyUpdates:0 writeConflicts:0 numYields:0 reslen:40 locks:{ Global: { acquireCount: { r: 19, w: 19 } }, Database: { acquireCount: { w: 19 } }, Collection: { acquireCount: { w: 19 } } } protocol:op_query 190ms

Update

Disk space is still being consumed. Two hours later, and 30 GB later:

$ df -h
Filesystem      Size   Used  Avail Capacity    iused     ifree %iused  Mounted on
/dev/disk3     477Gi  290Gi  186Gi    61%   76211558  48731360   61%   /

The question is: is there a bug in the progress indicator, or is there some kind of loop that keeps inserting the same documents?
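
One way to tell the two apart (just a sketch, assuming I can attach a shell while the restore is running) would be to watch the document count over time; a genuine insert loop should push the count far past what the source collection contains, while a broken progress indicator would leave the count looking normal:

$ mongo bigdata --quiet --eval 'print(db.getCollection("logs.sets.log").count())'
# Run this a few minutes apart and compare against the document count
# of the source deployment the dump was taken from.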

Update

It finished.

2017-09-10T19:35:52.268+0200    [########################]  bigdata.logs.sets.log  1604.0 GB/85.2 GB  (1881.8%)
2017-09-10T19:35:52.268+0200    restoring indexes for collection bigdata.logs.sets.log from metadata
2017-09-10T20:16:51.882+0200    finished restoring bigdata.logs.sets.log (3573548 documents)
2017-09-10T20:16:51.882+0200    done

1604.0 GB/85.2 GB (1881.8%)

Interesting. :)

It looks like a similar bug: https://jira.mongodb.org/browse/TOOLS-1579

There seems to be a fix that was backported to 3.5 and 3.4. The fix might not have been backported to 3.2. I'm thinking the problem might have to do with using gzip and/or snappy compression.
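
If staying on the 3.2 tools, one possible workaround (a sketch built on that assumption, reusing the paths from above) would be to take gzip out of the picture by decompressing the dump first and restoring it uncompressed, or to run the restore with a newer tools release that contains the TOOLS-1579 fix:

$ mongorestore --version
# If the tools are still 3.2.x, either upgrade them or restore without --gzip:
$ gunzip -r /volumes/lacie2tb/backup/mongo20170909/bigdata/
$ mongorestore -v --drop --db bigdata /volumes/lacie2tb/backup/mongo20170909/bigdata/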

