Conversation
Management command to move deployments and all related data between projects. Handles all direct and indirect relationships including Events, SourceImages, Occurrences, Jobs, Detections, Classifications, Identifications, SourceImageCollections (with mixed-collection splitting), pipeline configs, processing services, and taxa M2M links.

Features:
- Dry-run mode by default; --execute to commit
- Per-deployment before/after snapshots with row counts
- Conservation checks (source + target = original)
- FK integrity and indirect access validation
- Shared resource handling (clone vs reassign devices, sites, S3 sources)
- Raw SQL for ProcessingService M2M to avoid ORM column mismatch

Co-Authored-By: Claude <noreply@anthropic.com>
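The conservation check mentioned above ("source + target = original") amounts to simple count bookkeeping. A minimal sketch in plain Python (hypothetical helper and key names; the actual command queries the ORM for these counts):

```python
def check_conservation(before: dict, after: dict, moved: dict) -> list:
    """Verify no rows were lost or duplicated by the move.

    `before`/`after` hold per-model row counts keyed as "source_<model>" and
    "target_<model>"; `moved` holds how many rows were moved per model.
    """
    errors = []
    for model, moved_n in moved.items():
        # Source must have shrunk by exactly the moved count.
        if after[f"source_{model}"] != before[f"source_{model}"] - moved_n:
            errors.append(f"{model}: source count does not balance")
        # Target must have grown by exactly the moved count.
        if after[f"target_{model}"] != before[f"target_{model}"] + moved_n:
            errors.append(f"{model}: target count does not balance")
    return errors
```

An empty list means every moved row is accounted for on both sides.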
Documents the full relationship map, edge cases, and validation checklist for moving deployments between projects. Co-Authored-By: Claude <noreply@anthropic.com>
The reassign_deployments command now also recalculates:
- Event cached counts (captures_count, detections_count, occurrences_count)
- Related calculated fields on both the source and target projects

Co-Authored-By: Claude <noreply@anthropic.com>
- Renamed from reassign_deployments to move_project_data to better communicate the scope of the operation (all occurrences, identifications, classifications, etc., not just deployment records)
- Added conditional scope warning that shows full data weight when processed data exists, or a simple note for unprocessed transfers
- Added identifier membership: users who made identifications on moved data are auto-added to the target project with the Identifier role
- Added default filter config copy (score threshold, include/exclude taxa, default processing pipeline)

Co-Authored-By: Claude <noreply@anthropic.com>
- Identifier users now get their existing source project role preserved (e.g. ProjectManager stays ProjectManager), falling back to the Identifier role only for users who aren't source project members
- TaxaLists linked to the source project are now also linked to the target project (M2M add, not move; TaxaLists can be shared)
- Dry-run output shows TaxaLists and role assignments that will be made

Co-Authored-By: Claude <noreply@anthropic.com>
Pull request overview
Adds an operator-facing management command to move one or more deployments (and their dependent data) from a source project to a target/new project, plus a written guide documenting the relationship map and validation steps for doing so safely.
Changes:
- Added `move_project_data` Django management command implementing deployment reassignment with dry-run analysis, execution, and post-move validation.
- Added planning/ops documentation describing the full relationship map, edge cases (shared resources, mixed collections), and a validation checklist.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 10 comments.
| File | Description |
|---|---|
| docs/claude/planning/deployment-reassignment-guide.md | New guide documenting deployment reassignment relationships, strategies, and validation checklist. |
| ami/main/management/commands/move_project_data.py | New management command to analyze and execute moving deployments and related data between projects. |
| **S3StorageSource** | FK on Deployment (`data_source`) | Clone if `project_id` points to source project; update deployment FK |
| **Device** | FK on Deployment | Clone if `project_id` = source project; or set NULL (shared) |
| **Site** | FK on Deployment (`research_site`) | Clone if `project_id` = source project; or set NULL (shared) |
The guide’s reassignment strategy for S3StorageSource/Device/Site says “Clone if project_id points to source project”, but the implementation clones only when the resource is shared with other deployments and otherwise reassigns the same row to the target project. Update the guide to reflect the shared-vs-exclusive behavior to avoid operators expecting a clone in all cases.
Suggested replacement:

| **S3StorageSource** | FK on Deployment (`data_source`) | If shared with other deployments in the source project, clone and point the moved deployment to the clone; if exclusive to this deployment, reassign the existing row’s `project_id` to the target project and update the deployment FK |
| **Device** | FK on Deployment | If shared with other deployments in the source project, clone for the moved deployment; if exclusive, reassign the existing row’s `project_id` to the target project. For globally shared devices or devices with no project, you may instead set the deployment’s device FK to NULL |
| **Site** | FK on Deployment (`research_site`) | If shared with other deployments in the source project, clone for the moved deployment; if exclusive, reassign the existing row’s `project_id` to the target project. For shared/global sites, you may instead set the deployment’s site FK to NULL |
    # Update both projects' related calculated fields (events + deployments)
    self.log(f" Updating source project cached fields...")
    source_project.update_related_calculated_fields()
    self.log(f" Source project '{source_project.name}': related fields updated")

    self.log(f" Updating target project cached fields...")
    target_project.update_related_calculated_fields()
    self.log(f" Target project (id={target_id}): related fields updated")
Project.update_related_calculated_fields() iterates over all events and deployments in the project and calls .update_calculated_fields(save=True) per row (N+1 writes). On large projects this can dominate runtime even when only a few deployments were moved. Consider updating only affected deployments/events (you already have deployment_ids and moved_event_pks) or adding a bulk recalculation helper for project stats.
    # 15. Copy default filter config to target project
    if include_taxa or exclude_taxa:
        for t in source_project.default_filters_include_taxa.all():
            target_project.default_filters_include_taxa.add(t)
        for t in source_project.default_filters_exclude_taxa.all():
            target_project.default_filters_exclude_taxa.add(t)
        self.log(" [15/16] Copied default filter taxa config")
    target_project.default_filters_score_threshold = source_project.default_filters_score_threshold
    if source_project.default_processing_pipeline:
        target_project.default_processing_pipeline = source_project.default_processing_pipeline
    target_project.save()
Default filter taxa copying uses .add() on the target project without clearing any existing include/exclude taxa. If the target project already has defaults configured, this results in a merged/superset config rather than a true copy from the source. If the intent is to copy/replace, clear the target M2Ms first (or explicitly document the merge behavior).
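The merge-vs-copy distinction is easy to see with plain sets standing in for the two M2M managers (illustrative values only, not from the PR):

```python
# Target project already has its own default include-taxa configured.
source_include = {"Lepidoptera", "Noctuidae"}
target_include = {"Formicidae"}

# What the command does today: .add() per taxon, i.e. a union (merged superset).
merged = target_include | source_include

# A true copy/replace would clear the target M2M first, then add.
replaced = set(source_include)
```

If the merge is intentional, documenting it is enough; otherwise a `.clear()` before the `.add()` loop gives copy semantics.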
    for coll, target_count, other_count in mixed_collections:
        moved_images = coll.images.filter(deployment_id__in=deployment_ids)
        moved_image_ids = list(moved_images.values_list("pk", flat=True))

        if clone_collections:
            new_coll = SourceImageCollection.objects.create(
                name=coll.name,
                project_id=target_id,
                description=coll.description or "",
            )
            new_coll.images.set(moved_image_ids)
            self.log(
                f" [10/12] Split collection '{coll.name}': "
                f"cloned {len(moved_image_ids):,} images → target collection id={new_coll.pk}"
            )

        coll.images.remove(*moved_image_ids)
        self.log(f" [10/12] Removed {len(moved_image_ids):,} images from source collection '{coll.name}'")
Collection splitting materializes all moved image PKs into Python and then calls coll.images.remove(*moved_image_ids). For large moves (tens of thousands of images) this can be very slow and may exceed DB/query parameter limits. Prefer operating on the M2M through table with queryset deletes/insert-select (e.g., delete rows where sourceimage__deployment_id__in=...) and, if cloning, bulk-create through rows in batches instead of passing a huge *args list.
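Independent of the through-table approach, oversized `*args` lists can be avoided by chunking PKs. A small batching helper (hypothetical name and batch size, not part of the PR):

```python
from itertools import islice


def batched(ids, size):
    """Yield lists of at most `size` PKs so no single query carries an
    unbounded parameter list."""
    it = iter(ids)
    while batch := list(islice(it, size)):
        yield batch

# Usage sketch against the command's variables:
#   for chunk in batched(moved_image_ids, 1000):
#       coll.images.remove(*chunk)
```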
    cloned_count = 0
    for config in ProjectPipelineConfig.objects.filter(project_id=source_project_id):
        if config.pipeline_id not in existing_pipelines:
            ProjectPipelineConfig.objects.create(project_id=target_id, pipeline_id=config.pipeline_id)
Pipeline config cloning only copies pipeline_id and ignores enabled and config fields on ProjectPipelineConfig. This will silently change pipeline behavior in the target project (e.g., disabled pipelines become enabled; custom config lost). Clone the full config (at least enabled and config) when creating target configs.
Suggested replacement:

    ProjectPipelineConfig.objects.create(
        project_id=target_id,
        pipeline_id=config.pipeline_id,
        enabled=getattr(config, "enabled", True),
        config=getattr(config, "config", None),
    )
    dets_via_project = Detection.objects.filter(source_image__project_id=target_id).count()
    dets_via_dep = Detection.objects.filter(source_image__deployment_id__in=deployment_ids).count()
    if dets_via_project != dets_via_dep:
        errors.append(f"Detection count mismatch: via project={dets_via_project}, via deployment={dets_via_dep}")
        self.log(
            f" FAIL: Detection count mismatch: via project={dets_via_project}, via deployment={dets_via_dep}"
        )
    else:
        self.log(
            f" OK: Detections consistent ({dets_via_project:,} via project, {dets_via_dep:,} via deployment)"
        )
dets_via_project counts all detections in the target project, then compares to detections for only the moved deployments. If the target project already has data, this check will fail (or worse, pass/fail for the wrong reasons). Scope the “via project” counts to the moved deployments as well (and similarly for classifications/identifications) so the validation actually verifies the moved slice.
    for taxon in Taxon.objects.filter(pk__in=taxa_ids):
        taxon.projects.add(target_project)
Linking taxa iterates and performs taxon.projects.add(target_project) per taxon. For large moves this becomes an N+1 write pattern. Prefer a bulk M2M add via the reverse relation (e.g., target_project.taxa.add(*taxa_ids)) or bulk-creating through rows in batches to reduce DB round-trips.
Suggested replacement:

    # Bulk-link taxa to avoid N+1 writes on the Taxon.projects M2M
    target_project.taxa.add(*taxa_ids)
    target_project = Project(name=create_project_name, owner=source_project.owner)
    target_project.save()
When creating a new target project, this uses Project(...) + save(), which bypasses ProjectManager.create() and its create_related_defaults() behavior (default device/site/deployment/collection/processing service). This can leave the new project missing expected defaults and behaving differently from projects created elsewhere. Prefer Project.objects.create(..., create_defaults=...) (explicitly choose True/False) to match established project-creation semantics.
Suggested replacement:

    target_project = Project.objects.create(
        name=create_project_name,
        owner=source_project.owner,
        create_defaults=True,
    )
    if cls_via_project != cls_via_dep:
        errors.append(
            f"Classification count mismatch: via project={cls_via_project}, via deployment={cls_via_dep}"
        )
On classification mismatch, an error is appended but no FAIL log line is emitted (unlike detections/identifications). This makes validation output inconsistent and can hide the specific failure in logs. Add a self.log("FAIL: ...") branch here similar to the detections check.
Suggested addition (after the errors.append call):

    self.log(
        f" FAIL: Classification count mismatch: via project={cls_via_project}, via deployment={cls_via_dep}"
    )
    else:
        self.log(" ALL VALIDATION CHECKS PASSED", style=self.style.SUCCESS)
        self.log(f"{'=' * 60}")
If validation fails, the command only logs errors but does not raise an exception / return a non-zero exit code. That makes failures easy to miss in automation and contradicts the “rolls back cleanly on any failure” expectation—especially since validation happens after the atomic block has committed. Consider moving validation inside the transaction (so you can raise and rollback) and/or raising CommandError at the end when errors is non-empty.
Suggested replacement (raise when validation fails, otherwise log success):

        self.log(f"{'=' * 60}")
        raise CommandError("Post-move validation failed; see log output above for details.")
    else:
        self.log(" ALL VALIDATION CHECKS PASSED", style=self.style.SUCCESS)
        self.log(f"{'=' * 60}")
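Why moving validation inside the atomic block enables a clean rollback can be shown with a pure-Python stand-in for `transaction.atomic` (illustrative only; Django's real context manager works on the database, not a dict):

```python
import copy
from contextlib import contextmanager


@contextmanager
def atomic(db):
    """Snapshot on entry; restore and re-raise on any exception."""
    snapshot = copy.deepcopy(db)
    try:
        yield db
    except Exception:
        db.clear()
        db.update(snapshot)
        raise


db = {"source_rows": 3, "target_rows": 0}
try:
    with atomic(db):
        db["source_rows"] -= 3
        db["target_rows"] += 3
        raise ValueError("post-move validation failed")  # simulated failed check
except ValueError:
    pass  # caller sees the failure; the move was rolled back
```

Validation that runs after the block commits can only report; raised inside the block, it undoes the move.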
Design spec for making projects portable between Antenna instances. Covers UUID fields, Organization model, natural keys, export/import commands, and Darwin Core integration. Includes 21 research areas spanning internal data validation, biodiversity standards (GBIF, iNaturalist, BOLD, Camtrap DP), and patterns from non-biodiversity applications (GitLab, WordPress, Notion, Jira). Status: draft — pending research and validation. Co-Authored-By: Claude <noreply@anthropic.com>
Documents concrete use cases for each model's UUID beyond export/import: Darwin Core field mappings (occurrenceID, eventID, etc.), ML pipeline coordination (Pipeline/Algorithm slug collision risk), device tracking across institutions, and scientific reproducibility requirements. Co-Authored-By: Claude <noreply@anthropic.com>
Actionable comments posted: 6
📒 Files selected for processing (3)
- ami/main/management/commands/move_project_data.py
- docs/claude/planning/deployment-reassignment-guide.md
- docs/claude/planning/project-portability-spec.md
    create_project_name = options.get("create_project")
    if create_project_name:
        self.log(f"Target project: NEW — '{create_project_name}'")
        target_project = None
    elif options.get("target_project"):
        try:
            target_project = Project.objects.get(pk=options["target_project"])
        except Project.DoesNotExist:
            raise CommandError(f"Target project {options['target_project']} does not exist")
        create_project_name = None
        self.log(f"Target project: {target_project.name} (id={target_project.pk})")
    else:
        raise CommandError("Must specify either --target-project or --create-project")
--target-project and --create-project should be mutually exclusive.
If both are provided, Line 176 currently prefers --create-project silently. That can move data into an unintended newly created project.
Proposed fix
      create_project_name = options.get("create_project")
    + if create_project_name and options.get("target_project"):
    +     raise CommandError("Use either --target-project or --create-project, not both")
    +
      if create_project_name:
          self.log(f"Target project: NEW — '{create_project_name}'")
          target_project = None
Ruff (0.15.7), line 183 (B904): Within an except clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling.
    from ami.users.models import User
    from ami.users.roles import Identifier, Role

    self.log(f"\n Identifiers (users with identifications on moved data):")
Ruff-reported lint errors/warnings should be cleaned up before merge.
Static analysis flags F541 and B007 in this segment set. These are straightforward fixes and help keep CI green.
Proposed fix
    - self.log(f"\n Identifiers (users with identifications on moved data):")
    + self.log("\n Identifiers (users with identifications on moved data):")
    - self.log(f"\n Source project default filters:")
    + self.log("\n Source project default filters:")
    - self.log(f"\n TaxaLists linked to source project:")
    + self.log("\n TaxaLists linked to source project:")
    - for coll, target_count, other_count in mixed_collections:
    + for coll, _target_count, _other_count in mixed_collections:
    - self.log(f" Updating source project cached fields...")
    + self.log(" Updating source project cached fields...")
    - self.log(f" Updating target project cached fields...")
    + self.log(" Updating target project cached fields...")
    - self.log(f" OK: No moved images in source collections" if not any("collection" in e for e in errors) else "")
    + self.log(" OK: No moved images in source collections" if not any("collection" in e for e in errors) else "")

Also applies to lines 347, 359, 506, 616, 620, 739.
Ruff (0.15.7), line 333 (F541): f-string without any placeholders; remove extraneous `f` prefix.
    role_to_assign = Identifier
    role_source = "default (not a source member)"
    identifier_role_map[uid] = role_to_assign
    self.log(f" {user.email}: " f"{role_to_assign.display_name} ({role_source})")
Avoid logging user emails to application logs.
Line 344 and Line 567 log email addresses and also send them to logger.info. This creates avoidable PII retention in centralized logs.
Proposed fix
    - self.log(f" {user.email}: " f"{role_to_assign.display_name} ({role_source})")
    + self.log(f" user_id={user.pk}: {role_to_assign.display_name} ({role_source})")
    ...
    - self.log(f" [13/16] Added {user.email}" f" as {role_cls.display_name}")
    + self.log(f" [13/16] Added user_id={user.pk} as {role_cls.display_name}")

Also applies to line 567.
    new_coll = SourceImageCollection.objects.create(
        name=coll.name,
        project_id=target_id,
        description=coll.description or "",
    )
Cloned SourceImageCollection drops sampling metadata.
On Line 511 through Line 515, cloned collections copy only name and description. method and kwargs are not preserved, which can change collection semantics after the move.
Proposed fix
      if clone_collections:
          new_coll = SourceImageCollection.objects.create(
              name=coll.name,
              project_id=target_id,
              description=coll.description or "",
    +         method=coll.method,
    +         kwargs=coll.kwargs or {},
          )
    dets_via_project = Detection.objects.filter(source_image__project_id=target_id).count()
    dets_via_dep = Detection.objects.filter(source_image__deployment_id__in=deployment_ids).count()
    if dets_via_project != dets_via_dep:
        errors.append(f"Detection count mismatch: via project={dets_via_project}, via deployment={dets_via_dep}")
        self.log(
            f" FAIL: Detection count mismatch: via project={dets_via_project}, via deployment={dets_via_dep}"
        )
    else:
        self.log(
            f" OK: Detections consistent ({dets_via_project:,} via project, {dets_via_dep:,} via deployment)"
        )

    cls_via_project = Classification.objects.filter(detection__source_image__project_id=target_id).count()
    cls_via_dep = Classification.objects.filter(detection__source_image__deployment_id__in=deployment_ids).count()
    if cls_via_project != cls_via_dep:
        errors.append(
            f"Classification count mismatch: via project={cls_via_project}, via deployment={cls_via_dep}"
        )
    else:
        self.log(f" OK: Classifications consistent ({cls_via_project:,})")

    idents_via_project = Identification.objects.filter(occurrence__project_id=target_id).count()
    idents_via_dep = Identification.objects.filter(occurrence__deployment_id__in=deployment_ids).count()
    if idents_via_project != idents_via_dep:
        errors.append(
            f"Identification count mismatch: via project={idents_via_project}, via deployment={idents_via_dep}"
        )
    else:
        self.log(f" OK: Identifications consistent ({idents_via_project:,})")
Validation currently compares moved data to the entire target project.
On Line 695 through Line 723, *_via_project counts all records in the target project, while *_via_dep counts only moved deployments. If the target already has data, this can produce false validation failures.
Proposed fix
    - dets_via_project = Detection.objects.filter(source_image__project_id=target_id).count()
    + dets_via_project = Detection.objects.filter(
    +     source_image__deployment_id__in=deployment_ids,
    +     source_image__project_id=target_id,
    + ).count()
      dets_via_dep = Detection.objects.filter(source_image__deployment_id__in=deployment_ids).count()
    - cls_via_project = Classification.objects.filter(detection__source_image__project_id=target_id).count()
    + cls_via_project = Classification.objects.filter(
    +     detection__source_image__deployment_id__in=deployment_ids,
    +     detection__source_image__project_id=target_id,
    + ).count()
      cls_via_dep = Classification.objects.filter(detection__source_image__deployment_id__in=deployment_ids).count()
    - idents_via_project = Identification.objects.filter(occurrence__project_id=target_id).count()
    + idents_via_project = Identification.objects.filter(
    +     occurrence__deployment_id__in=deployment_ids,
    +     occurrence__project_id=target_id,
    + ).count()
      idents_via_dep = Identification.objects.filter(occurrence__deployment_id__in=deployment_ids).count()
| **Device** | FK on Deployment | Clone if `project_id` = source project; or set NULL (shared) |
| **Site** | FK on Deployment (`research_site`) | Clone if `project_id` = source project; or set NULL (shared) |
Guide should align with implemented command behavior for ownership handling.
This section currently states Device/Site can be set to NULL and that project creation includes description, but the command clones/reassigns Device/Site and creates target project with name/owner only. Please align wording to avoid operational confusion.
Also applies to: 49-50, 69-75
Summary
Management command to move deployments and all associated data from one project to another. Motivated by a request to move northern Quebec stations (Umiujaq, Kuujjuaq, Kangiqsujuaq, Inukjuak) from "Insectarium de Montreal" into a new "Nunavik" project.
This is a comprehensive data transfer that handles 16 categories of related data.
Usage
Features
Test plan