Update lastTransitionTime for volume and nic in the machineCondition #1428
Conversation
@coderabbitai review
✅ Actions performed: Review triggered.
Walkthrough
The machine controller's status computation mechanism is refactored to merge conditions incrementally instead of overwriting them. The ready condition type is renamed to "MachineReady," and the transition time calculations for volume and network interface readiness are improved to use per-item state transitions. Related test expectations are updated accordingly.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed
❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In @poollet/machinepoollet/controllers/machine_controller.go:
- Around line 628-635: When status == corev1.ConditionTrue and iterating
nicStatuses to set lastTransitionTime, the code currently picks the first
non-nil LastStateTransitionTime; instead choose the most recent timestamp (the
max) among all nicStatus.LastStateTransitionTime values similar to
computeVolumesReadyCondition. Update the loop over nicStatuses to compare each
non-nil nicStatus.LastStateTransitionTime and assign lastTransitionTime to the
latest (largest) time, leaving lastTransitionTime nil if none found.
- Around line 651-671: The mergeMachineConditions function currently appends new
conditions and never removes old ones, causing unbounded growth; change it to
replace or update the existing condition entry for the same Type instead of
appending: build a map from condition Type to its index from the existing slice,
then for each newCond in current either overwrite the existing entry at that
index (updating Status, Reason, Message, and LastTransitionTime appropriately)
or insert if not present; ensure you update lastStatus bookkeeping (the same map
or recompute) so duplicate types are not left behind and the Conditions slice
remains deduplicated per Type.
- Around line 586-593: The current block that sets lastTransitionTime when
status == corev1.ConditionTrue takes the first non-nil
vs.LastStateTransitionTime from volumeStatuses which can understate when the
condition actually transitioned; instead iterate all volumeStatuses and compute
the most recent (maximum) LastStateTransitionTime (compare pointers/dereferenced
times safely, skipping nils) and assign that max to lastTransitionTime so it
reflects the latest volume transition when the overall condition is true.
- Around line 554-556: Change the hard-coded string Type for the MachineReady
condition to use the explicit enum type like the other conditions: replace the
current Type: "MachineReady" in the Machine condition construction with Type:
computev1alpha1.MachineConditionType("MachineReady") so it matches the pattern
used for VolumesReady and NetworkInterfacesReady within the same controller.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- poollet/machinepoollet/controllers/machine_controller.go
- poollet/machinepoollet/controllers/machine_controller_test.go
🧰 Additional context used
🧬 Code graph analysis (2)
poollet/machinepoollet/controllers/machine_controller.go (3)
- iri/apis/machine/v1alpha1/api.pb.go (3): Conditions (991-1000), Conditions (1013-1013), Conditions (1028-1030)
- api/compute/v1alpha1/machine_types.go (4): VolumeStateAttached (186-186), MachineCondition (193-204), MachineConditionType (190-190), NetworkInterfaceStateAttached (162-162)
- internal/apis/compute/machine_types.go (4): VolumeStateAttached (174-174), MachineCondition (181-192), MachineConditionType (178-178), NetworkInterfaceStateAttached (150-150)
poollet/machinepoollet/controllers/machine_controller_test.go (2)
- api/compute/v1alpha1/machine_types.go (1): MachineConditionType (190-190)
- internal/apis/compute/machine_types.go (1): MachineConditionType (178-178)
🔇 Additional comments (4)
poollet/machinepoollet/controllers/machine_controller_test.go (2)
- 199-220: LGTM! Test expectations correctly updated. The test assertions properly align with the controller changes: the condition type is renamed to "MachineReady", and the volume and NIC readiness messages now use dynamic resource names (volume.Name, nic.Name).
- 508-536: Condition type updates are consistent with the controller changes.
poollet/machinepoollet/controllers/machine_controller.go (2)
- 507-508: Condition handling refactored to use incremental merging. The status update now computes conditions and merges them with existing conditions, supporting condition history persistence as per the PR objectives.
- 1022-1022: Good catch fixing the log message. The log message now correctly reflects that this handler deals with network interfaces, not secrets.
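To make "incremental merging" concrete, here is a small, self-contained sketch of the behaviour using simplified stand-in types (not the PR's actual structs): conditions whose status did not change are left alone, and a changed status produces an additional history entry rather than an overwrite.

```go
package main

import "fmt"

// condition is a simplified stand-in for computev1alpha1.MachineCondition.
type condition struct {
	Type   string
	Status string
}

// merge appends a condition only when its status changed for that type,
// mirroring the merge-instead-of-overwrite behaviour reviewed below.
func merge(existing, current []condition) []condition {
	updated := append([]condition{}, existing...)
	last := make(map[string]string, len(existing))
	for _, c := range existing {
		last[c.Type] = c.Status
	}
	for _, c := range current {
		if prev, ok := last[c.Type]; !ok || prev != c.Status {
			updated = append(updated, c)
			last[c.Type] = c.Status
		}
	}
	return updated
}

func main() {
	existing := []condition{{Type: "MachineReady", Status: "False"}}
	current := []condition{{Type: "MachineReady", Status: "True"}}
	// Both entries survive, preserving the condition history.
	fmt.Println(merge(existing, current))
}
```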
```diff
 return computev1alpha1.MachineCondition{
-	Type:   "Ready",
+	Type:   "MachineReady",
 	Status: status,
```
🧩 Analysis chain (repository: ironcore-dev/ironcore)

🏁 Script executed:
```bash
#!/bin/bash
# Search for any code that references the old "Ready" condition type for machines
rg -n --type=go '"Ready"' -C 2 | grep -i 'condition\|machine'
```
Length of output: 1053

🏁 Script executed:
```bash
#!/bin/bash
# Read the machine_controller.go file to see the condition type change
cat -n poollet/machinepoollet/controllers/machine_controller.go | sed -n '540,570p'
```
Length of output: 1546

🏁 Script executed:
```bash
#!/bin/bash
# Search for condition type constant definitions
rg -n --type=go 'Condition.*=.*"' | grep -i 'type.*condition' | head -20
```
Length of output: 47

🏁 Script executed:
```bash
#!/bin/bash
# Search for MachineCondition type definition and usage patterns
rg -n --type=go 'type.*Condition' | grep -i machine | head -20
```
Length of output: 1587

🏁 Script executed:
```bash
#!/bin/bash
# Read the MachineConditionType definition and constants
cat -n api/compute/v1alpha1/machine_types.go | sed -n '185,230p'
```
Length of output: 2517

🏁 Script executed:
```bash
#!/bin/bash
# Search for MachineConditionType constants
rg -n --type=go 'MachineConditionType\s*=' api/compute/v1alpha1/machine_types.go
```
Length of output: 47

🏁 Script executed:
```bash
#!/bin/bash
# Search for all condition type strings in machine_controller.go
rg -n --type=go '".*Ready"' poollet/machinepoollet/controllers/machine_controller.go
```
Length of output: 575

🏁 Script executed:
```bash
#!/bin/bash
# Search for "Ready" condition type usage (not "MachineReady", "VolumesReady", etc.)
rg -n --type=go 'Type.*"Ready"'
```
Length of output: 47

🏁 Script executed:
```bash
#!/bin/bash
# Search more broadly for "Ready" condition type in both api and internal versions
rg -n '"Ready"' --type=go api/compute/v1alpha1/ internal/apis/compute/ | grep -i condition
```
Length of output: 47

🏁 Script executed:
```bash
#!/bin/bash
# Check internal/apis/compute/machine_types.go for condition type definitions
cat -n internal/apis/compute/machine_types.go | sed -n '170,250p'
```
Length of output: 3600

🏁 Script executed:
```bash
#!/bin/bash
# Search for other controller files to check condition type patterns
find . -path ./vendor -prune -o -name '*_controller.go' -type f -print | grep -E '(machinepool|machine)' | head -5
```
Length of output: 401

🏁 Script executed:
```bash
#!/bin/bash
# Search for condition type usage in machinepool_controller.go
rg -n '".*Ready"' ./poollet/machinepoollet/controllers/machinepool_controller.go | head -20
```
Length of output: 47

🏁 Script executed:
```bash
#!/bin/bash
# Search for condition type usage in internal controllers
rg -n 'Type.*=.*".*Ready"' ./internal/controllers/compute/ --type=go
```
Length of output: 47
Use explicit type casting for consistency with other condition types in the same file.
Line 555 should use Type: computev1alpha1.MachineConditionType("MachineReady") to match the pattern used for VolumesReady (line 601) and NetworkInterfacesReady (line 643) in the same controller.
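A minimal sketch of the suggested construction follows; the wrapper function name and the import paths are assumptions for illustration, only the explicit MachineConditionType conversion on the Type field is the point.

```go
import (
	corev1 "k8s.io/api/core/v1"

	computev1alpha1 "github.com/ironcore-dev/ironcore/api/compute/v1alpha1"
)

// machineReadyCondition is a hypothetical wrapper showing the suggested
// construction with an explicit MachineConditionType conversion, matching
// the pattern used for VolumesReady and NetworkInterfacesReady.
func machineReadyCondition(status corev1.ConditionStatus) computev1alpha1.MachineCondition {
	return computev1alpha1.MachineCondition{
		Type:   computev1alpha1.MachineConditionType("MachineReady"),
		Status: status,
	}
}
```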
🤖 Prompt for AI Agents
In @poollet/machinepoollet/controllers/machine_controller.go around lines 554 -
556, Change the hard-coded string Type for the MachineReady condition to use the
explicit enum type like the other conditions: replace the current Type:
"MachineReady" in the Machine condition construction with Type:
computev1alpha1.MachineConditionType("MachineReady") so it matches the pattern
used for VolumesReady and NetworkInterfacesReady within the same controller.
```go
if status == corev1.ConditionTrue {
	for _, vs := range volumeStatuses {
		if vs.LastStateTransitionTime != nil {
			lastTransitionTime = vs.LastStateTransitionTime
			break
		}
	}
}
```
Transition time selection may not reflect the actual condition transition.
When all volumes are ready, the code takes the LastStateTransitionTime from the first volume with a non-nil value. This may not represent when the condition actually transitioned (e.g., when the last volume became ready). Consider using the most recent (maximum) transition time across all volumes for a more accurate LastTransitionTime.
💡 Suggested approach
```diff
 if status == corev1.ConditionTrue {
 	for _, vs := range volumeStatuses {
-		if vs.LastStateTransitionTime != nil {
-			lastTransitionTime = vs.LastStateTransitionTime
-			break
+		if vs.LastStateTransitionTime != nil && (lastTransitionTime == nil || vs.LastStateTransitionTime.After(lastTransitionTime.Time)) {
+			lastTransitionTime = vs.LastStateTransitionTime
 		}
 	}
 }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
if status == corev1.ConditionTrue {
	for _, vs := range volumeStatuses {
		if vs.LastStateTransitionTime != nil && (lastTransitionTime == nil || vs.LastStateTransitionTime.After(lastTransitionTime.Time)) {
			lastTransitionTime = vs.LastStateTransitionTime
		}
	}
}
```
🤖 Prompt for AI Agents
In @poollet/machinepoollet/controllers/machine_controller.go around lines 586 -
593, The current block that sets lastTransitionTime when status ==
corev1.ConditionTrue takes the first non-nil vs.LastStateTransitionTime from
volumeStatuses which can understate when the condition actually transitioned;
instead iterate all volumeStatuses and compute the most recent (maximum)
LastStateTransitionTime (compare pointers/dereferenced times safely, skipping
nils) and assign that max to lastTransitionTime so it reflects the latest volume
transition when the overall condition is true.
```go
if status == corev1.ConditionTrue {
	for _, nicStatus := range nicStatuses {
		if nicStatus.LastStateTransitionTime != nil {
			lastTransitionTime = nicStatus.LastStateTransitionTime
			break
		}
	}
}
```
Same transition time selection concern applies here.
Similar to computeVolumesReadyCondition, consider using the most recent LastStateTransitionTime when all network interfaces are ready.
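No committable suggestion is attached here; a minimal sketch of what the analogous fix could look like, assuming the same field names and types as in the volume case:

```go
// Pick the most recent non-nil LastStateTransitionTime across all NIC
// statuses, mirroring the suggested change for computeVolumesReadyCondition.
if status == corev1.ConditionTrue {
	for _, nicStatus := range nicStatuses {
		if nicStatus.LastStateTransitionTime != nil && (lastTransitionTime == nil || nicStatus.LastStateTransitionTime.After(lastTransitionTime.Time)) {
			lastTransitionTime = nicStatus.LastStateTransitionTime
		}
	}
}
```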
🤖 Prompt for AI Agents
In @poollet/machinepoollet/controllers/machine_controller.go around lines 628 -
635, When status == corev1.ConditionTrue and iterating nicStatuses to set
lastTransitionTime, the code currently picks the first non-nil
LastStateTransitionTime; instead choose the most recent timestamp (the max)
among all nicStatus.LastStateTransitionTime values similar to
computeVolumesReadyCondition. Update the loop over nicStatuses to compare each
non-nil nicStatus.LastStateTransitionTime and assign lastTransitionTime to the
latest (largest) time, leaving lastTransitionTime nil if none found.
```go
func (r *MachineReconciler) mergeMachineConditions(
	existing []computev1alpha1.MachineCondition,
	current []computev1alpha1.MachineCondition,
) []computev1alpha1.MachineCondition {
	updated := append([]computev1alpha1.MachineCondition{}, existing...)

	lastStatus := make(map[string]corev1.ConditionStatus, len(existing))
	for _, c := range existing {
		lastStatus[string(c.Type)] = c.Status
	}

	for _, newCond := range current {
		conditionType := string(newCond.Type)
		if prev, ok := lastStatus[conditionType]; !ok || prev != newCond.Status {
			updated = append(updated, newCond)
			lastStatus[conditionType] = newCond.Status
		}
	}

	return updated
}
```
Unbounded condition history growth.
The merge logic appends new conditions when the status changes but never removes or replaces older conditions of the same type. Over time, this will cause the Conditions slice to grow unboundedly as the machine transitions between states.
For example, a machine transitioning Pending → Running → Terminating will accumulate three MachineReady conditions rather than maintaining one with updated LastTransitionTime.
Consider either:
- Replace the existing condition of the same type (standard Kubernetes pattern)
- Cap the history to a maximum number of entries per type
- Deduplicate by removing old entries of the same type before appending
🛠️ Option 1: Replace existing condition (standard pattern)
```diff
 func (r *MachineReconciler) mergeMachineConditions(
 	existing []computev1alpha1.MachineCondition,
 	current []computev1alpha1.MachineCondition,
 ) []computev1alpha1.MachineCondition {
-	updated := append([]computev1alpha1.MachineCondition{}, existing...)
-
-	lastStatus := make(map[string]corev1.ConditionStatus, len(existing))
-	for _, c := range existing {
-		lastStatus[string(c.Type)] = c.Status
-	}
-
-	for _, newCond := range current {
-		conditionType := string(newCond.Type)
-		if prev, ok := lastStatus[conditionType]; !ok || prev != newCond.Status {
-			updated = append(updated, newCond)
-			lastStatus[conditionType] = newCond.Status
+	conditionMap := make(map[computev1alpha1.MachineConditionType]computev1alpha1.MachineCondition, len(existing))
+	for _, c := range existing {
+		conditionMap[c.Type] = c
+	}
+	for _, newCond := range current {
+		if prev, ok := conditionMap[newCond.Type]; !ok || prev.Status != newCond.Status {
+			conditionMap[newCond.Type] = newCond
+		}
+	}
+	updated := make([]computev1alpha1.MachineCondition, 0, len(conditionMap))
+	for _, c := range conditionMap {
+		updated = append(updated, c)
 	}
-
 	return updated
 }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
func (r *MachineReconciler) mergeMachineConditions(
	existing []computev1alpha1.MachineCondition,
	current []computev1alpha1.MachineCondition,
) []computev1alpha1.MachineCondition {
	conditionMap := make(map[computev1alpha1.MachineConditionType]computev1alpha1.MachineCondition, len(existing))
	for _, c := range existing {
		conditionMap[c.Type] = c
	}
	for _, newCond := range current {
		if prev, ok := conditionMap[newCond.Type]; !ok || prev.Status != newCond.Status {
			conditionMap[newCond.Type] = newCond
		}
	}
	updated := make([]computev1alpha1.MachineCondition, 0, len(conditionMap))
	for _, c := range conditionMap {
		updated = append(updated, c)
	}
	return updated
}
```
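If keeping some per-type history is preferred instead of a single entry per type (Option 2 above), a hedged sketch of capping the retained entries could look like this; the helper name and the constant value are illustrative, not part of the PR.

```go
// maxConditionsPerType bounds how many historical entries are kept per
// condition type; the value 5 is purely illustrative.
const maxConditionsPerType = 5

// capConditionHistory keeps at most maxConditionsPerType entries per type,
// preferring the most recently appended ones.
func capConditionHistory(conditions []computev1alpha1.MachineCondition) []computev1alpha1.MachineCondition {
	perType := make(map[computev1alpha1.MachineConditionType]int)
	kept := make([]computev1alpha1.MachineCondition, 0, len(conditions))
	// Walk newest-to-oldest, assuming newer entries are appended last.
	for i := len(conditions) - 1; i >= 0; i-- {
		c := conditions[i]
		if perType[c.Type] >= maxConditionsPerType {
			continue
		}
		perType[c.Type]++
		kept = append(kept, c)
	}
	// Restore the original oldest-first ordering.
	for i, j := 0, len(kept)-1; i < j; i, j = i+1, j-1 {
		kept[i], kept[j] = kept[j], kept[i]
	}
	return kept
}
```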
🤖 Prompt for AI Agents
In @poollet/machinepoollet/controllers/machine_controller.go around lines 651 -
671, The mergeMachineConditions function currently appends new conditions and
never removes old ones, causing unbounded growth; change it to replace or update
the existing condition entry for the same Type instead of appending: build a map
from condition Type to its index from the existing slice, then for each newCond
in current either overwrite the existing entry at that index (updating Status,
Reason, Message, and LastTransitionTime appropriately) or insert if not present;
ensure you update lastStatus bookkeeping (the same map or recompute) so
duplicate types are not left behind and the Conditions slice remains
deduplicated per Type.
Proposed Changes
- Update the machineCondition for volume and nic with the respective lastStateTransitionTime
- Update the machineCondition implementation to persist the Conditions history
- /ref Integrate conditions API for all relevant API resources in IronCore #1343

Summary by CodeRabbit
Bug Fixes
Tests