Author Topic: Photo Mechanic Plus Catalog stuck on Updating 1 Catalog, 10 Batches remaining

Offline Sallinen

Hi,

I have my photos and catalog on a NAS SMB share (I know, the catalog is supposed to be local) because I use the same catalog on two computers.

The catalog worked fine for a single computer over the NAS share, but after I added it to a second computer and added photos to the catalog there, the Metadata Updates task got stuck on "Updating 1 Catalog, 10 Batches remaining" with the following error displayed.

I do have a backup of the catalog, so I can revert the addition of the new photos I made on the second computer.

Code:
[Thu 19:47:42] Error: RPCServerConn.rpc_dispatch: exception: CILA::PROTO::RPCException/apply_change_journal_batch failed: ["apply_change_journal_batch: SQLite3::SQLException: cannot commit - no transaction is active"] to=104 msg="apply_change_journal_batch" location=/Applications/Photo Mechanic Plus.app/Contents/pmruby/lib/ruby/site_ruby/2.2.0/cila/proto/rpc.rb:649:in `call', /Applications/Photo Mechanic Plus.app/Contents/pmruby/lib/ruby/site_ruby/2.2.0/cila/proto/rpc.rb:665:in `method_missing', ./archive/common/catalog.rb:279:in `do_apply_change_journal_batch', ./archive/common/catalog.rb:191:in `block in apply_change_journal_batch', /Applications/Photo Mechanic Plus.app/Contents/pmruby/lib/ruby/site_ruby/2.2.0/cila/syncwaiter.rb:85:in `synchronize', ./archive/common/catalog.rb:190:in `apply_change_journal_batch', /Applications/Photo Mechanic Plus.app/Contents/pmruby/lib/ruby/site_ruby/2.2.0/cila/proto/rpc_object.rb:72:in `block in initialize', /Applications/Photo Mechanic Plus.app/Contents/pmruby/lib/ruby/site_ruby/2.2.0/cila/proto/rpc.rb:1114:in `call', /Applications/Photo Mechanic Plus.app/Contents/pmruby/lib/ruby/site_ruby/2.2.0/cila/proto/rpc.rb:1114:in `block in do_rpc_dispatch', /Applications/Photo Mechanic Plus.app/Contents/pmruby/lib/ruby/site_ruby/2.2.0/cila/proto/rpc.rb:1113:in `catch', /Applications/Photo Mechanic Plus.app/Contents/pmruby/lib/ruby/site_ruby/2.2.0/cila/proto/rpc.rb:1113:in `do_rpc_dispatch', /Applications/Photo Mechanic Plus.app/Contents/pmruby/lib/ruby/site_ruby/2.2.0/cila/proto/rpc.rb:1062:in `block in rpc_dispatch', /Applications/Photo Mechanic Plus.app/Contents/pmruby/lib/ruby/site_ruby/2.2.0/cila/fiberpool.rb:60:in `call', /Applications/Photo Mechanic Plus.app/Contents/pmruby/lib/ruby/site_ruby/2.2.0/cila/fiberpool.rb:60:in `block in add_fiber'
[Thu 19:47:42] Error: CatalogMetadataUpdateTask.try_spawn_metadata_update: apply_change_journal_batch failed: ["apply_change_journal_batch: CILA::PROTO::RPCException/apply_change_journal_batch failed: [\"apply_change_journal_batch: SQLite3::SQLException: cannot commit - no transaction is active\"]"]
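
For context, the innermost error comes straight from SQLite: judging by the pmruby paths and the SQLite3:: class names in the trace, Photo Mechanic Plus embeds Ruby with the sqlite3 gem, and that gem raises SQLite3::SQLException with exactly this message whenever a COMMIT is issued while no transaction is open, for example because the preceding BEGIN or a write inside the transaction already failed, which flaky SMB file locking makes more likely. A minimal Ruby sketch of just that SQLite behavior, not Photo Mechanic's actual code path:

Code:
require "sqlite3"

db = SQLite3::Database.new(":memory:")
db.execute("CREATE TABLE photos (id INTEGER PRIMARY KEY, path TEXT)")

begin
  # COMMIT with no open transaction: the BEGIN either never ran or
  # already failed, so SQLite refuses to commit.
  db.commit
rescue SQLite3::SQLException => e
  puts e.message  # => "cannot commit - no transaction is active"
end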

Offline Sallinen

Restarting Photo Mechanic Plus and re-integrating the catalog seemed to solve the issue; the catalog updated successfully.

Offline Sallinen

The next 'Include in Catalog' operation worked as expected.

Offline Kirk Baker

  • Senior Software Engineer
  • Camera Bits Staff
Are you using the same catalog on two computers at the same time? If so, that will eventually corrupt your catalog. Sharing a catalog between two computers simultaneously is not supported.

-Kirk
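
For anyone wondering why simultaneous sharing is so dangerous: the catalog is a SQLite database (see the SQLite3 errors in the log above), and SQLite serializes writers with file locks that many network filesystems, SMB included, honor unreliably. Even on a local disk, a second writer is refused while the first holds the write lock. A rough Ruby illustration of that locking, not Photo Mechanic code:

Code:
require "sqlite3"
require "tempfile"

file = Tempfile.new(["catalog", ".db"])
a = SQLite3::Database.new(file.path)
b = SQLite3::Database.new(file.path)
a.execute("CREATE TABLE t (v INTEGER)")

a.transaction(:immediate)  # writer A takes the write lock
begin
  b.execute("INSERT INTO t VALUES (1)")  # writer B is refused
rescue SQLite3::BusyException => e
  puts e.message  # => "database is locked"
end
a.commit

On a local disk that refusal is reliable; over SMB the underlying lock may not be enforced at all, so two writers can interleave and corrupt the file.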

Offline Sallinen

Quote from: Kirk Baker
Are you using the same catalog on two computers at the same time? If so, that will eventually corrupt your catalog. Sharing a catalog between two computers simultaneously is not supported.

-Kirk

I'm using it on one computer at a time; the other has Photo Mechanic closed, just to be sure.

As I'll be switching between the two computers (going through 128,000 photos for a photo book), I decided to keep the catalog on the NAS so I wouldn't have to remember to copy it back and forth.
The NAS has RAID plus nightly ZFS snapshots, which are rsync'd to a remote server "just in case" something eventually goes wrong.
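
In case the off-site step is useful to anyone with a similar setup, it's just an rsync of the snapshot contents, roughly like this (the paths and host below are made up for illustration):

Code:
# Hypothetical layout: ZFS exposes snapshots under .zfs/snapshot/.
src  = "/tank/.zfs/snapshot/nightly/catalogs/photobook/"
dest = "backup@remote.example.com:/backups/photobook/"

# -a preserves attributes, --delete mirrors deletions. Copying from the
# snapshot rather than the live share keeps the copy crash-consistent.
system("rsync", "-a", "--delete", src, dest) or abort("rsync failed")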