feat: provider plugin architecture + Netlify adapter #545
seanspeaks wants to merge 62 commits into feature/integration-router-v2-drop-modules-router from
Conversation
…atform deployment

Introduces the foundation for deploying Frigg on Netlify:
- Queue provider abstraction (QueueProvider interface, factory, SQS/Netlify/QStash adapters) with appDefinition.queue.provider support
- Netlify handler wrapper (create-netlify-handler.js), the equivalent of create-handler.js without the AWS Secrets Manager dependency
- Netlify app handler helper with serverless-http bridge
- Split Netlify function entry points (auth, user) for per-concern cold starts
- netlify-adapter package structure at packages/devtools/netlify-adapter/

Work in progress: additional function entry points, config generator, and Netlify DB support still pending.

https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
…pport, and config generator

- Split Netlify Function entry points per concern: auth (with v2 routes), user, health, admin, docs, integration-routes, webhooks
- Background function queue worker that routes messages to the correct integration
- Netlify scheduler adapter (poll-and-dispatch pattern via cron + database)
- Updated scheduler factory with 'netlify' provider support
- netlify.toml generator with v2 API route redirects
- Netlify DB (Neon PostgreSQL) validation and documentation
- Scheduled function for cron-based sync dispatching
- Package index exporting all adapter utilities

https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
Design for making all deployment targets (AWS, Netlify, Vercel, GCP, Azure, fly.io, CloudFlare, Docker/local) installable plugin packages with a consistent shape. Includes provider interface, database plugin interface, file-level migration map, and phased implementation plan. Grounded in existing multi-provider patterns (queue factory, scheduler factory, infrastructure CloudProviderFactory, netlify-adapter). https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
Concrete mapping tables for provider-vercel (QStash already exists), provider-gcp (Cloud Functions v2 + Cloud Tasks + Cloud KMS), and provider-local (Express + Docker Compose for OpenClaw and local dev). Tiered provider list: T1 (AWS, Netlify, local), T2 (Vercel, GCP), T3 stubs (Azure, CloudFlare, fly.io), T4 future (Railway, Render, etc). https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
- Provider interface gains deploy(), preflightCheck(), teardown()
- Deploy matrix showing what each provider runs under the hood
- Difficulty assessment: fly.io (easy, days), Azure (medium, weeks), CloudFlare (hard, significant: different runtime, not Node.js)
- CloudFlare caveat: V8 isolates can't use Express, may need a router adapter or build-time translation
- Updated phases to include Vercel + GCP as Phase 6

https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
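The interface additions above can be sketched as a plain object. This is a hedged sketch only: the method names come from the PR description, but the signatures, arguments, and return shapes are assumptions, not the framework's actual API.

```javascript
// Hedged sketch of the provider plugin shape this PR describes.
// Method names are from the PR text; signatures and return values are assumed.
const exampleProvider = {
    name: 'example',

    // Runtime auto-detection, e.g. checking an env var the platform sets
    detect: () => false,

    // Static checks against the app definition
    validate: (appDefinition) => ({ errors: [] }),

    // Emit platform config (netlify.toml, serverless.yml, ...)
    generateConfig: (appDefinition) => ({}),

    // Deploy lifecycle: validate -> preflight -> deploy, plus teardown
    preflightCheck: async () => ({ ok: true, issues: [] }),
    deploy: async (appDefinition) => ({ deployed: true }),
    teardown: async () => ({ removed: true }),
};
```

A registry can then treat every deployment target uniformly, calling the same lifecycle methods regardless of which package implements them.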
app.use(bodyParser.urlencoded({ extended: true }));
app.use(
    cors({
        origin: '*',
Check warning
Code scanning / CodeQL: Permissive CORS configuration

Copilot Autofix (16 days ago)
In general, to fix a permissive CORS configuration you should avoid origin: '*' (especially with credentials: true) and instead either (a) disable CORS when not needed, or (b) explicitly whitelist allowed origins, typically via a list that can come from configuration. You should also avoid other wildcard settings like allowedHeaders: '*' and methods: '*' when you know the specific headers and methods you expect.
For this file, the least intrusive, robust fix is to (1) introduce a small whitelist-based origin function that reads allowed origins from an environment variable (or uses a safe default of no cross-origin access), and (2) replace the wildcard origin, allowedHeaders, and methods with explicit, commonly used values. That preserves functionality for allowed origins while removing the blanket '*'. Specifically:
- Add a helper getCorsOrigin that reads process.env.ALLOWED_CORS_ORIGINS (a comma-separated list), trims the entries, and returns a function (origin, callback) compatible with the cors package. If no env var is set, default to false (CORS disabled).
- Use that helper in the cors options: origin: getCorsOrigin().
- Replace allowedHeaders: '*' and methods: '*' with explicit lists such as ['Content-Type', 'Authorization', 'X-Requested-With'] and ['GET', 'POST', 'PUT', 'PATCH', 'DELETE', 'OPTIONS']. This is restrictive enough to address the warning but broad enough to avoid breaking typical API clients.
All changes occur inside packages/devtools/netlify-adapter/lib/create-netlify-app-handler.js, near the top where the app and CORS middleware are defined.
@@ -13,6 +13,37 @@
 const { flushDebugLog } = require('@friggframework/core/logs');
 const { createNetlifyHandler } = require('./create-netlify-handler');

+const getCorsOrigin = () => {
+    const allowedOriginsEnv = process.env.ALLOWED_CORS_ORIGINS;
+
+    if (!allowedOriginsEnv) {
+        // No explicit CORS configuration: disable cross-origin requests by default
+        return false;
+    }
+
+    const allowedOrigins = allowedOriginsEnv
+        .split(',')
+        .map((o) => o.trim())
+        .filter((o) => o.length > 0);
+
+    if (allowedOrigins.length === 0) {
+        return false;
+    }
+
+    return function (origin, callback) {
+        // Allow same-origin or non-browser requests (no origin header)
+        if (!origin) {
+            return callback(null, true);
+        }
+
+        if (allowedOrigins.includes(origin)) {
+            return callback(null, true);
+        }
+
+        return callback(null, false);
+    };
+};
+
 const createNetlifyApp = (applyMiddleware) => {
     const app = express();

@@ -20,9 +51,9 @@
     app.use(bodyParser.urlencoded({ extended: true }));
     app.use(
         cors({
-            origin: '*',
-            allowedHeaders: '*',
-            methods: '*',
+            origin: getCorsOrigin(),
+            allowedHeaders: ['Content-Type', 'Authorization', 'X-Requested-With'],
+            methods: ['GET', 'POST', 'PUT', 'PATCH', 'DELETE', 'OPTIONS'],
             credentials: true,
         })
     );
Implements all remaining provider plugin interface properties for the Netlify adapter, bringing it from ~80% to 100% coverage of the interface defined in plan.md.

New capabilities:
- deploy(), preflightCheck(), teardown(): full deploy lifecycle
- detect(): auto-detect Netlify runtime via process.env.NETLIFY
- loadSecrets(): no-op (Netlify injects env vars natively)
- invokeFunctionAdapter: HTTP-based cross-function invocation
- getFunctionEntryPoints(): programmatic access to function templates
- validateNetlifyConfig(): full appDefinition validation (DB, encryption, WebSocket, VPC, SSM, queue provider compatibility)
- ScheduledJobRepository: Prisma-backed persistence for the poll-and-dispatch scheduler (replaces the in-memory-only fallback)
- scheduled-sync.js: real implementation wired to scheduler + queue
- Cron schedule section in generateNetlifyToml()
- ScheduledJob model in both the PostgreSQL and MongoDB Prisma schemas

index.js now exports the complete provider plugin shape: name, createHandler, createAppHandler, QueueProvider, SchedulerAdapter, CryptorAdapter, loadSecrets, invokeFunctionAdapter, WebSocketAdapter, generateConfig, generateEnvTemplate, infrastructureBuilders, deploy, preflightCheck, teardown, validate, getFunctionEntryPoints, detect, recommendedDatabases, providedEnvVars, utils

59 tests across 7 test suites.

https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
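Two of the smaller capabilities listed above are simple enough to sketch inline. This is a hedged sketch under the assumptions stated in the commit (Netlify exposes NETLIFY=true in its environment, and secrets arrive as ordinary env vars); the real adapter's return shapes may differ.

```javascript
// Hedged sketch of detect() and loadSecrets() as described in the commit.
// Assumption: Netlify sets NETLIFY=true in its build and function runtimes.
const detect = () => process.env.NETLIFY === 'true';

// No-op: Netlify injects env vars natively, so there is no Secrets Manager
// round-trip; returning an empty object keeps the call site uniform with
// providers that do fetch secrets.
const loadSecrets = async () => ({});
```

Keeping loadSecrets async even though it does nothing lets callers `await provider.loadSecrets()` identically across AWS and Netlify.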
 */
function isCommandAvailable(command) {
    try {
        execSync(`which ${command}`, { stdio: 'ignore' });
The which command is Unix-specific and will fail on Windows systems, breaking the preflight check.
Impact: Deployment will fail on Windows with 'which' is not recognized as an internal or external command.
Fix:
function isCommandAvailable(command) {
    try {
        const cmd = process.platform === 'win32' ? 'where' : 'which';
        execSync(`${cmd} ${command}`, { stdio: 'ignore' });
        return true;
    } catch {
        return false;
    }
}

Spotted by Graphite
…rovider-netlify
Aligns with the provider plugin naming convention so the provider
registry can resolve via require(`@friggframework/provider-${name}`).
- Rename packages/devtools/netlify-adapter → packages/devtools/provider-netlify
- Update package.json name to @friggframework/provider-netlify
- Update all internal references and JSDoc comments
- Update plan.md file location tables and status section
https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
const prismaClient = await connectPrisma();

const repository = new ScheduledJobRepository({ prismaClient });
const queueProvider = createQueueProvider();
Queue provider defaults to SQS when no arguments are passed, which will fail on Netlify. The createQueueProvider() factory defaults to QUEUE_PROVIDERS.SQS when no provider is specified (see queue-provider-factory.js line 50). This Netlify-specific function should explicitly specify the queue provider:

const queueProvider = createQueueProvider({
    provider: 'netlify-background'
});

Or load it from the app definition:

const { queue } = loadAppDefinition();
const queueProvider = createQueueProvider({
    appDefinition: { queue }
});

Without this fix, the function will attempt to instantiate SqsQueueProvider, which requires the AWS SDK and credentials, causing failures in Netlify's runtime.
Spotted by Graphite
    workerMap[name] = createQueueWorker(IntegrationClass);
}

const queueProvider = createQueueProvider();
Queue provider defaults to SQS when no arguments are passed, which will fail on Netlify. The createQueueProvider() factory defaults to QUEUE_PROVIDERS.SQS when no provider is specified (see queue-provider-factory.js line 50). This Netlify-specific worker should explicitly specify the queue provider:

const queueProvider = createQueueProvider({
    provider: 'netlify-background'
});

Or load it from the app definition:

const { queue } = loadAppDefinition();
const queueProvider = createQueueProvider({
    appDefinition: { queue }
});

Without this fix, the function will attempt to instantiate SqsQueueProvider, which requires the AWS SDK and credentials, causing failures when parsing events in Netlify's runtime.
Spotted by Graphite
- Use `where` on Windows instead of the Unix-only `which` for command detection
- Explicitly pass provider: 'netlify-background' to createQueueProvider() so it doesn't default to SQS in Netlify's runtime

https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
Same issue as scheduled-sync.js — createQueueProvider() without arguments defaults to SQS, which requires AWS SDK and will fail in Netlify's runtime. https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
…der packages
Adds the missing link between `appDefinition.provider` and the
`@friggframework/provider-{name}` packages in node_modules:
- `resolveProvider(appDefinition)` loads the right provider plugin
- `loadAppDefinition()` now returns the full appDefinition object
(backward compatible — integrations/userConfig still destructure)
- Schema updated to accept 'netlify' alongside 'aws'
Convention: `provider: 'netlify'` → `require('@friggframework/provider-netlify')`
This enables CLI commands (frigg deploy, frigg start) to delegate to
the provider's deploy(), generateConfig(), validate() etc. instead
of hardcoding AWS behavior.
https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
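The resolution convention above can be sketched as a small helper. This is a hedged sketch: the function names mirror those mentioned in the commit, but the error handling and the exact fallback to 'aws' are assumptions about the implementation.

```javascript
// Hedged sketch of the provider resolution convention described above:
// appDefinition.provider 'netlify' -> require('@friggframework/provider-netlify').
function resolveProviderName(appDefinition) {
    // Assumption: 'aws' is the default when no provider is declared
    const provider = (appDefinition && appDefinition.provider) || 'aws';
    return `@friggframework/provider-${provider}`;
}

function resolveProvider(appDefinition) {
    const pkg = resolveProviderName(appDefinition);
    try {
        // Loads the installed provider plugin from node_modules
        return require(pkg);
    } catch (err) {
        throw new Error(
            `Provider package ${pkg} is not installed. Run: npm install ${pkg}`
        );
    }
}
```

CLI commands like `frigg deploy` can then call resolveProvider() once and delegate to the plugin's deploy(), generateConfig(), and validate() methods.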
…spatch

All six provider-sensitive CLI commands now check appDefinition.provider and delegate to the installed provider package:
- `frigg deploy` → provider.deploy() (validate → preflight → deploy)
- `frigg start` → provider-specific dev server (e.g. `netlify dev`)
- `frigg build` → provider.generateConfig() + getFunctionEntryPoints()
- `frigg doctor` → AWS-only guard (CloudFormation stacks)
- `frigg repair` → AWS-only guard (CloudFormation stacks)
- `frigg generate-iam` → AWS-only guard (IAM)

For AWS (the default), all commands fall through to the existing behavior unchanged. For Netlify or other providers, commands delegate to the provider plugin.

Adds utils/provider-helper.js as the shared CLI helper that loads the appDefinition and resolves the provider package.

https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
9 tests covering the provider routing logic:

Deploy command (5 tests):
- Falls through to osls for AWS (no provider / explicit 'aws')
- Delegates to provider.deploy() for non-AWS
- Exits on validation errors
- Exits on deploy failure with helpful error messages

Build command (2 tests):
- Falls through to osls for AWS
- Delegates to provider.generateConfig() + getFunctionEntryPoints()

AWS-only guards (2 tests):
- repair rejects a non-AWS provider
- generate-iam rejects a non-AWS provider

Also fixes build.test.js to match the linter-updated log message.

https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
for (const schedule of dueSchedules) {
    try {
        if (this.queueProvider) {
            await this.queueProvider.send(
                schedule.payload,
                schedule.queueResourceId
            );
        } else {
            console.log(
                `[NetlifyScheduler] No queue provider — logging payload for: ${schedule.scheduleName}`,
                JSON.stringify(schedule.payload)
            );
        }

        await this.repository.delete(schedule.scheduleName);
        processed++;
Race condition: If queueProvider.send() succeeds but repository.delete() fails (database error, network timeout, etc.), the schedule remains in the database with state='PENDING' and will be re-dispatched on the next cron run, causing duplicate job execution.
Fix: Update the schedule state to 'PROCESSING' before dispatching, then delete after success:
for (const schedule of dueSchedules) {
    try {
        // Mark as processing first
        await this.repository.save({
            ...schedule,
            state: 'PROCESSING'
        });

        if (this.queueProvider) {
            await this.queueProvider.send(
                schedule.payload,
                schedule.queueResourceId
            );
        }

        // Only delete after successful dispatch
        await this.repository.delete(schedule.scheduleName);
        processed++;
    } catch (error) {
        // Update state to FAILED instead of deleting
        await this.repository.save({
            ...schedule,
            state: 'FAILED'
        }).catch(() => {});
        errors++;
    }
}

Spotted by Graphite
Race condition: if queueProvider.send() succeeds but repository.delete() fails, the schedule stays PENDING and gets re-dispatched on the next cron tick, causing duplicate job execution.

Fix: mark the schedule as PROCESSING before dispatch. Since findDue() only queries state='PENDING', a concurrent cron tick cannot re-pick it. On failure, mark it as FAILED instead of leaving it PENDING.

State machine: PENDING → PROCESSING → (deleted) or FAILED

Also adds 23 unit tests covering the full adapter: constructor, CRUD, dispatch ordering, failure modes, and the multi-schedule scenario.

https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
Covers the provider plugin interface, resolution chain, CLI command dispatch, scheduler push vs poll-and-dispatch patterns, and the PROCESSING state guard for duplicate dispatch prevention. Includes: alternatives considered, risks/mitigations, test coverage matrix, and instructions for adding new providers. https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
The scheduler barrel (index.js) eagerly required all adapter files,
including eventbridge-scheduler-adapter.js which imports
@aws-sdk/client-scheduler. This caused builds on non-AWS platforms
(Netlify) to fail with "Cannot find module '@aws-sdk/client-scheduler'"
even though the EventBridge adapter is never used there.
Fix:
- Barrel exports use lazy getters for adapter classes
- Factory uses inline require() inside each switch case
- Consumers that only use createSchedulerService (the factory) or
specific adapters by name never trigger unrelated SDK loads
The import chain that was breaking:
core/index → application → scheduler-commands → scheduler/index
→ eventbridge-scheduler-adapter → @aws-sdk/client-scheduler BOOM
After fix:
core/index → application → scheduler-commands → scheduler/index
→ (factory + interface only, adapters deferred)
Includes 5 tests verifying the lazy loading behavior.
https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
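The lazy-getter barrel pattern described above can be demonstrated in isolation. This is a self-contained sketch, not the actual scheduler barrel: the module names are illustrative, and a local flag stands in for the expensive require() of an AWS SDK-dependent adapter.

```javascript
// Self-contained demonstration of the lazy-getter barrel pattern the
// commit describes (names are illustrative, not the real module paths).
let adapterLoaded = false;

const barrel = {
    // Eager export: safe because its require chain pulls in no AWS SDK
    createSchedulerService() {
        return { kind: 'factory-built' };
    },

    // Lazy export: the getter body (standing in for
    // require('./eventbridge-scheduler-adapter')) runs only on property
    // access, so merely requiring the barrel never loads the AWS SDK.
    get EventBridgeSchedulerAdapter() {
        adapterLoaded = true;
        return class EventBridgeSchedulerAdapter {};
    },
};
```

Consumers that only call createSchedulerService never flip `adapterLoaded`; reading `barrel.EventBridgeSchedulerAdapter` is what triggers the deferred load, which is exactly why the "Cannot find module '@aws-sdk/client-scheduler'" failure disappears on Netlify.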
Three fixes for the Netlify deploy failure:

1. createFriggInfrastructure() now checks appDefinition.provider before entering the AWS CloudFormation pipeline. Non-AWS providers get routed to provider.validate() + generateConfig() + getFunctionEntryPoints() instead of composeServerlessDefinition(). This is the root cause: the backend's `npm run build` calls `node infrastructure.js package` directly, bypassing the CLI's buildCommand() where provider detection already existed.

2. The netlify.toml generator adds external_node_modules for express, body-parser, cors, serverless-http, @prisma/client, and mongoose. Netlify's esbuild bundler cannot resolve these from within @friggframework/core's node_modules; marking them external tells esbuild to leave them as runtime requires from node_modules.

3. Fix a bad require path in encryption-schema-registry.js: '../integrations/utils/map-integration-dto' (resolves to database/integrations/...) → '../../integrations/utils/...' (resolves to core/integrations/...). This caused esbuild to fail with "Could not resolve" during Netlify function bundling.

https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
… to packages/providers/

Architecture change: make @friggframework/core provider-agnostic by extracting all AWS SDK-dependent adapters into @friggframework/provider-aws and organizing provider packages under packages/providers/.

New package structure:
- packages/providers/aws/ → @friggframework/provider-aws (new)
- packages/providers/netlify/ → @friggframework/provider-netlify (moved from devtools/)

What moved to provider-aws:
- EventBridgeSchedulerAdapter (from core/infrastructure/scheduler/)
- SqsQueueProvider (from core/queues/providers/)
- LambdaInvoker (from core/database/adapters/)
- MigrationStatusRepositoryS3 (from core/database/repositories/)

What stays in core but now lazy-loads AWS SDKs:
- Cryptor (KMS import deferred to generateDataKey/decryptDataKey)
- Worker (SQS import deferred to first method call)
- QueuerUtil (SQS import deferred to first send/batchSend)
- WebSocket repos (API Gateway import deferred to getActiveConnections)
- Health router (KMS already lazy)

Core changes:
- Removed @aws-sdk/* from core dependencies
- Added @friggframework/provider-aws as an optional peerDependency
- Scheduler/queue factories lazy-load from provider-aws
- Updated workspace + lerna config: packages/providers/*

https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
Complete the clean separation of AWS dependencies from @friggframework/core by introducing proper port/adapter interfaces following DDD principles.

New interfaces (ports) in core:
- QueueClientInterface: sendMessage, sendMessageBatch, getQueueUrl
- EncryptionKeyProviderInterface: generateDataKey, decryptDataKey
- WebSocketMessageSenderInterface: send (with StaleConnectionError)
- AesEncryptionKeyProvider: core adapter for local AES encryption

New adapters in @friggframework/provider-aws:
- SqsQueueClient: SQS implementation of QueueClientInterface
- KmsEncryptionKeyProvider: KMS implementation of EncryptionKeyProviderInterface
- ApiGatewayMessageSender: API Gateway implementation of WebSocketMessageSenderInterface
- KMS health check: VPC detection + KMS capability check extracted from health.js

Updated core classes to use the interfaces (all backward-compatible):
- Worker: accepts an optional queueClient, lazy-loads SqsQueueClient as the default
- Cryptor: accepts an optional keyProvider, lazy-loads the KMS or AES provider based on the shouldUseAws flag (existing constructor API preserved)
- QueuerUtil: delegates to SqsQueueClient, adds setQueueClient() for DI
- WebSocket repos (4 Prisma variants + 1 Mongoose): accept an optional messageSender, lazy-load ApiGatewayMessageSender as the default
- health.js: lazy-loads the KMS health check from provider-aws

Zero @aws-sdk imports remain in @friggframework/core source files. All existing tests pass with no regressions.

https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
…ter injection
BREAKING CHANGE: All AWS adapter dependencies must now be explicitly
injected rather than auto-discovered via lazy-loading. This completes
the hexagonal architecture separation between @friggframework/core
(ports/interfaces) and @friggframework/provider-aws (adapters).
Changes:
- Worker: constructor requires { queueClient } for queue operations
- Cryptor: shouldUseAws=true requires explicit { keyProvider }
- QueuerUtil: setQueueClient() must be called before send/batchSend
- WebSocket repos: constructor requires messageSender for send ops
- WebsocketConnection model: setMessageSender() must be called first
- All error messages include migration instructions + ADR reference
Also fixes:
- QueuerUtil.batchSend buffer mutation bug (passed reference then spliced)
- Cryptor tests: all 12 pass (was 0/7 previously)
- QueuerUtil tests: all 8 pass (was 6/7 previously)
- Worker tests: all 10 pass with mock QueueClientInterface
- Tests no longer depend on aws-sdk-client-mock
ADR: docs/architecture-decisions/010-decouple-aws-from-core.md
https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
- Fix wrong ADR path in 8 source files (docs/adr/001 → docs/architecture-decisions/010)
- Fix missing closing brace in start-command/index.js (startWithProvider never closed)
- Add missing uuid dependency in provider-aws/package.json
- Fix undefined workerFunctionName reference in the db-migration router
- Add AWS client reuse in the KMS and API Gateway adapters (cache per process/endpoint)
- Remove SqsQueueProvider re-exports from core queues/index.js (undermined the decoupling)
- Rewrite the WebSocket repo test to use a mock MessageSenderInterface (no aws-sdk-client-mock)
- Move lambda-invoker and migration-status-repository-s3 tests to the provider-aws package
- Fix ADR-MULTI-PROVIDER-SUPPORT: correct package paths, align with ADR-010 on breaking changes
- Fix wildcard @friggframework/core dependency in provider-netlify package.json

https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
…e by nft
On Netlify, nft traces static require() calls to determine what goes in
each function zip. loadAppDefinition() uses process.cwd() discovery
which nft can't trace, so the backend and its dependencies were excluded
from function bundles. The workaround was included_files: ["backend/**"]
which bloated every function zip with the entire backend.
Fix:
- Add setAppDefinition() to core's app-definition-loader.js. When set,
loadAppDefinition() returns the cached definition without filesystem
discovery. Backwards compatible — AWS and local dev still use the
process.cwd() fallback.
- Inject a preamble into every generated Netlify function entry point:
const { setAppDefinition } = require('...app-definition-loader');
const { Definition } = require('../../backend/index.js');
setAppDefinition(Definition);
The static require('../../backend/index.js') lets nft trace the
backend's dependency tree into each function bundle automatically.
- Remove backend/** from included_files in generated netlify.toml —
nft now handles this through normal dependency tracing.
https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
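The setAppDefinition() cache described above amounts to a module-level memo. This is a hedged sketch under stated assumptions: the real loader also performs process.cwd() filesystem discovery as its fallback, which is elided here.

```javascript
// Hedged sketch of the setAppDefinition() cache described in the commit.
// The real loader falls back to process.cwd() discovery; that path is
// omitted here and replaced with an error for illustration.
let cachedDefinition = null;

function setAppDefinition(definition) {
    cachedDefinition = definition;
}

function loadAppDefinition() {
    if (cachedDefinition) {
        // Bypass filesystem discovery entirely (the Netlify path)
        return cachedDefinition;
    }
    // Fallback (AWS / local dev): discover via process.cwd() — omitted
    throw new Error('No app definition set and discovery is unavailable');
}
```

Because the preamble's `require('../../backend/index.js')` is a static call, nft can trace the backend's dependency tree while setAppDefinition() keeps runtime lookup cwd-independent.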
…tory factories
@friggframework/core eagerly required all database implementations
at module load time, including MongoDB/DocumentDB modules, regardless
of which database the consuming app uses. This causes a runtime crash
(Cannot find module 'mongodb') in PostgreSQL-only environments where
mongodb/mongoose aren't installed — specifically Netlify Functions.
Fix:
- All 15 repository factory files now lazy-require Mongo/DocumentDB
implementations inside their switch case branches, so they only
load when DB_TYPE is 'mongodb' or 'documentdb'
- Factory module.exports use lazy getters for Mongo/DocumentDB class
re-exports (for direct testing), avoiding eager loading on require
- core/index.js uses lazy getters for all MongoDB-related symbols
(mongoose, IndividualUser, OrganizationUser, UserModel, etc.)
so they only load when accessed, not at require('@friggframework/core')
The PostgreSQL and Prisma code paths remain eagerly loaded since
they're always needed.
https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
…ations
The previous lazy-loading commit missed two critical eager require paths:
1. modules/entity.js → database/mongoose.js → require('mongoose')
This was pulled in via modules/index.js → core/index.js on every
require('@friggframework/core'), even for PostgreSQL-only deployments.
2. Five factory files still eagerly imported their DocumentDB implementations
at the top level. DocumentDB repos import documentdb-utils.js which
requires the 'mongodb' driver package.
Changes:
- modules/index.js: Entity is now a lazy getter
- core/index.js: Entity export is now a lazy getter, removed stale
Credential re-export (was always undefined — legacy artifact)
- credential-repository-factory.js: lazy-load DocumentDB impl
- module-repository-factory.js: lazy-load DocumentDB impl
- process-repository-factory.js: lazy-load DocumentDB impl
- admin-process-repository-factory.js: lazy-load DocumentDB impl
- script-schedule-repository-factory.js: lazy-load DocumentDB impl
Verified: require('@friggframework/core') now loads zero mongoose/mongodb
modules. Accessing core.Entity correctly triggers lazy load of mongoose.
https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
…dependency refactor: replace mongodb dependency with bson
When CloudFormation logical IDs for SubnetRouteTableAssociation change
(e.g., during framework version updates), the physical associations can
be silently lost. Subnets fall back to the main route table (IGW only),
causing Lambda functions in VPC to lose internet access via NAT Gateway.
Changes:
- Track routeTableAssociationCount in CloudFormation discovery results
- Add self-healing in resource discovery: when vpc.selfHeal is enabled
and route table has 0 subnet associations, re-associate via EC2 API
before CloudFormation runs (so CF sees matching state, no conflict)
- Fix ensureSubnetAssociations call passing {} instead of
discoveredResources in vpc-builder endpoint creation path
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…e-association-self-heal fix(infra): self-heal VPC subnet-route table association drift
Merge resolution strategy:
- Accept deletions: Mongoose models and related tests removed in next (model.js files, mongoose.js, encryption-integration test, password tests)
- Remove Mongoose lazy-loading: database/index.js, core/index.js, modules/index.js no longer export Mongoose symbols since the models are deleted
- Accept next's Prisma-based implementations: health-check repositories, mongodb-collection-utils tests, health router tests
- Accept next's type definitions: removed Mongoose Model imports
- Keep our branch additions: queue providers, encryption key interfaces, websocket interfaces, provider resolution
- package.json: replace mongoose with lodash.get + bson (from next)
- cloudformation-discovery.test.js: accept next's stack naming

https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
…tlify compatibility

database/config.js and encryption-schema-registry.js were calling findNearestBackendPackageJson() directly, bypassing the setAppDefinition() cache. On Netlify, process.cwd() is /var/task at runtime and backend/ files aren't in the deployment bundle, so cwd-based discovery fails.

By routing through loadAppDefinition(), these callers now use the cached app definition set by the provider-netlify preamble (setAppDefinition()), making database type detection and custom encryption schema loading work correctly in Netlify's serverless environment.

https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
        result = result[key];
    }
    return result === undefined ? defaultValue : result;
}
Custom deepGet doesn't support bracket notation paths
Medium Severity
The custom deepGet function replacing lodash.get only splits paths on . (dots), so it doesn't handle bracket notation like 'a[0].b' or 'items[2].name'. lodash.get supported both 'a.b.c' and 'a[0].b' path formats. Any caller using bracket notation will silently get undefined instead of the expected value, since 'a[0]' becomes a single key lookup rather than indexing into an array.
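One way to close the gap Bugbot describes is to normalize bracket notation to dot paths before splitting. This is a hedged sketch, not the project's actual fix: the regex-based normalization is an assumption about how one might restore lodash.get-style path support.

```javascript
// Hedged sketch: a deepGet that normalizes bracket notation to dot paths
// before splitting, so 'a[0].b' behaves like lodash.get('a[0].b').
function deepGet(obj, path, defaultValue) {
    const keys = String(path)
        .replace(/\[(\w+)\]/g, '.$1') // 'items[0].name' -> 'items.0.name'
        .replace(/^\./, '')           // drop a leading dot from '[0].x'
        .split('.');

    let result = obj;
    for (const key of keys) {
        if (result == null) {
            return defaultValue; // short-circuit on null/undefined
        }
        result = result[key];
    }
    return result === undefined ? defaultValue : result;
}
```

With this normalization, both `'a.b.c'` and `'items[2].name'` path formats resolve, matching the behavior callers of lodash.get would expect.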
for (const key of requiredKeys) {
-   const val = lodashGet(o, key);
+   const val = deepGet(o, key);
getAll treats falsy values as missing keys
Low Severity
In getAll, the check if (val) treats legitimate falsy values like 0, false, and '' as missing keys. This existed before with lodash.get but is worth noting since this function is being touched — the replacement deepGet now correctly returns these falsy values, but getAll discards them and reports them as missing, potentially throwing spurious errors.
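The safer check the comment alludes to is to treat only `undefined` as missing. This is a hedged sketch of such a getAll, not the framework's actual implementation; the minimal dot-path deepGet included here stands in for the one in the PR, and the error message wording is invented.

```javascript
// Minimal dot-path lookup standing in for the PR's deepGet (hypothetical).
function deepGet(obj, path, defaultValue) {
    let result = obj;
    for (const key of String(path).split('.')) {
        if (result == null) return defaultValue;
        result = result[key];
    }
    return result === undefined ? defaultValue : result;
}

// Hedged sketch: only genuinely absent keys count as missing, so
// legitimate falsy values (0, false, '') pass through to the caller.
function getAll(o, requiredKeys) {
    const missing = [];
    const found = {};
    for (const key of requiredKeys) {
        const val = deepGet(o, key);
        if (val === undefined) {
            missing.push(key);
        } else {
            found[key] = val;
        }
    }
    if (missing.length) {
        throw new Error(`Missing required keys: ${missing.join(', ')}`);
    }
    return found;
}
```

The `val === undefined` comparison is the whole fix relative to `if (val)`: a config value of `false` or `0` no longer triggers a spurious missing-key error.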
…e deps

npm ci was failing because encoding@0.1.13 and iconv-lite@0.6.3 were missing from the lock file after the merge from next.

https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
The previous regeneration from scratch only included Linux platform entries for @nx/nx-* optional deps. CI with latest npm requires all platform variants (darwin, win32, freebsd) to have lock file entries. Restored the lock file from origin/next and ran npm install to incorporate current package.json changes on top. https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
Cursor Bugbot has reviewed your changes and found 2 potential issues.
        return new AdminApiKeyRepositoryPostgres();

    case 'documentdb':
        const { AdminApiKeyRepositoryDocumentDB } = require('./admin-api-key-repository-documentdb');
Lazy require with const in switch cases lacks block scoping
Low Severity
Using const declarations inside switch case blocks without wrapping each case in curly braces means all declarations share the switch block's scope. While this works here because each variable name is unique and each case returns, it's fragile — a future refactor adding another case with a similarly-named destructured variable would cause a runtime SyntaxError. This pattern is repeated across all four repository factory files.
Additional Locations (2)
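The block-scoped variant Bugbot suggests wraps each case in braces so its const declarations get their own scope. This is a hedged sketch using stand-in objects in place of the real require() calls; the factory name and return shapes are illustrative only.

```javascript
// Hedged sketch of the braced-case pattern Bugbot recommends. Objects
// stand in for the lazily required repository classes (hypothetical).
function createAdminApiKeyRepository(dbType) {
    switch (dbType) {
        case 'documentdb': {
            // Braces make this const local to the case, so a future case
            // can safely declare the same name without a SyntaxError.
            const impl = { type: 'documentdb' }; // stands in for require(...)
            return impl;
        }
        default: {
            const impl = { type: 'postgres' }; // same name, separate scope
            return impl;
        }
    }
}
```

Without the braces, both `const impl` declarations would share the switch block's scope and the second one would be a redeclaration error at parse time.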
-   database = backendModule?.Definition?.database;
+   const { appDefinition } = loadAppDefinition();
+   const database = appDefinition?.database;
Circular dependency risk in config.js via loadAppDefinition
Medium Severity
getDatabaseType() now calls loadAppDefinition() from ../handlers/app-definition-loader. If app-definition-loader (or anything in its require chain) imports from ../../database/config — directly or transitively — this creates a circular dependency that may cause loadAppDefinition to be undefined at call time. The previous implementation used direct fs/path operations with no framework imports, avoiding this risk entirely.
CI runs `npm install -g npm@latest` (npm 11.x) before `npm ci`. npm 11 is stricter about optional dependencies like `encoding` and `iconv-lite` (optional deps of node-fetch) needing resolved entries in the lock file. Regenerated with npm 11.11.0 to match CI. https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
… effects

Previously, several core router modules called loadAppDefinition(), createIntegrationRouter(), or repository factory functions at module load time (require-time). This caused failures on Netlify and other environments where process.cwd() doesn't point to the app root, requiring consumers to call setAppDefinition() in a preamble before any require() of core routers.

Now all initialization is deferred until first use:
- auth.js: lazy getter for router and handler
- admin.js: ensureInitialized() middleware before route handlers
- user.js: ensureInitialized() middleware before route handlers
- health.js: ensureInitialized() called in route handlers
- integration-defined-routers.js: lazy getter for handlers
- integration-webhook-routers.js: lazy getter for handlers
- integration-defined-workers.js: lazy getter for handlers
- websocket.js: lazy getter for repository

This means setAppDefinition() can be called at any point before the first actual request, rather than needing to be called before any require() of core modules. The docs.js router already used this pattern and served as the reference implementation.

https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
Add __dirname-based path resolution as fallback in loadPrismaClient so bundlers (Netlify nft/esbuild) can trace the generated Prisma client. Also include @friggframework/core/generated/** in Netlify included_files to ensure the pre-generated client is bundled into function deploys. https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
Multiple fixes to address Netlify's 250 MB bundle limit:

- Switch Prisma engineType from "binary" to "library" (Node-API, smaller than standalone executables) and add the debian-openssl-3.0.x binaryTarget for Netlify's Debian runtime alongside the existing rhel target for Lambda
- Read the database type from appDefinition.database so only the relevant generated Prisma client (postgresql or mongodb) is listed in netlify.toml included_files, avoiding ~21 MB of unused client
- Rewrite @friggframework/core requires in Netlify function entry points to resolve through backend/node_modules/ instead of bare package specifiers, preventing pnpm monorepos from bundling two separate copies of the framework
- Add aws-sdk and @aws-sdk/* to esbuild external_node_modules so the AWS SDKs (only needed by provider-aws) are not bundled into Netlify functions

https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
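The two netlify.toml levers mentioned above might look roughly like this fragment. The exact paths and globs emitted by the config generator may differ; this is a hedged illustration of the keys involved, not the generator's actual output:

```toml
# Illustrative netlify.toml fragment (paths are assumptions)
[functions]
  # Bundle only the Prisma client matching appDefinition.database
  included_files = ["node_modules/@friggframework/core/generated/postgresql/**"]
  # Keep AWS SDKs out of Netlify function bundles (only provider-aws needs them)
  external_node_modules = ["aws-sdk", "@aws-sdk/*"]
```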
…chemas

Netlify builds on Debian but runs functions on Lambda (RHEL). The "native" target already covers the Debian build platform, so the explicit "debian-openssl-3.0.x" target was redundant and only added extra engine binaries to the npm package. Keep just ["native", "rhel-openssl-3.0.x"]: native handles whatever platform prisma generate runs on, and rhel handles the Lambda runtime.

https://claude.ai/code/session_01G5MdKSX6tG2NhYHVbYpjq9
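The resulting generator block would look something like the fragment below. This is a sketch; the actual schema files in the repo may set other options as well:

```prisma
generator client {
  provider      = "prisma-client-js"
  engineType    = "library"
  // "native" covers whatever platform `prisma generate` runs on (Debian on
  // Netlify's build image); the rhel target covers the Lambda runtime.
  binaryTargets = ["native", "rhel-openssl-3.0.x"]
}
```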
📦 Published PR as canary version: 2.0.0--canary.545.676b219.0

✨ Test out this PR locally via:

```shell
npm install @friggframework/admin-scripts@2.0.0--canary.545.676b219.0
npm install @friggframework/ai-agents@2.0.0--canary.545.676b219.0
npm install @friggframework/core@2.0.0--canary.545.676b219.0
npm install @friggframework/devtools@2.0.0--canary.545.676b219.0
npm install @friggframework/e2e@2.0.0--canary.545.676b219.0
npm install @friggframework/eslint-config@2.0.0--canary.545.676b219.0
npm install @friggframework/prettier-config@2.0.0--canary.545.676b219.0
npm install @friggframework/provider-aws@2.0.0--canary.545.676b219.0
npm install @friggframework/provider-netlify@2.0.0--canary.545.676b219.0
npm install @friggframework/schemas@2.0.0--canary.545.676b219.0
npm install @friggframework/serverless-plugin@2.0.0--canary.545.676b219.0
npm install @friggframework/test@2.0.0--canary.545.676b219.0
npm install @friggframework/ui@2.0.0--canary.545.676b219.0

# or

yarn add @friggframework/admin-scripts@2.0.0--canary.545.676b219.0
yarn add @friggframework/ai-agents@2.0.0--canary.545.676b219.0
yarn add @friggframework/core@2.0.0--canary.545.676b219.0
yarn add @friggframework/devtools@2.0.0--canary.545.676b219.0
yarn add @friggframework/e2e@2.0.0--canary.545.676b219.0
yarn add @friggframework/eslint-config@2.0.0--canary.545.676b219.0
yarn add @friggframework/prettier-config@2.0.0--canary.545.676b219.0
yarn add @friggframework/provider-aws@2.0.0--canary.545.676b219.0
yarn add @friggframework/provider-netlify@2.0.0--canary.545.676b219.0
yarn add @friggframework/schemas@2.0.0--canary.545.676b219.0
yarn add @friggframework/serverless-plugin@2.0.0--canary.545.676b219.0
yarn add @friggframework/test@2.0.0--canary.545.676b219.0
yarn add @friggframework/ui@2.0.0--canary.545.676b219.0
```

Note
High Risk
High risk because this introduces breaking API/initialization changes (explicit adapter injection) and restructures core/provider dependencies, which can cause runtime failures if consumers miss wiring or rely on removed mongoose/AWS imports.
Overview

Adds formal multi-provider architecture documentation via new ADRs, including a breaking plan to move AWS SDK usage out of `@friggframework/core` and a standard provider plugin interface.

Implements the decoupling by updating core components (notably `Worker`) to be provider-agnostic and require injected adapters (e.g., `queueClient`) instead of importing AWS SDKs directly, and updates tests to use injected mocks rather than AWS SDK mocks.

Cleans up core dependencies by removing `mongoose`-specific assertion/model code, replacing `lodash.get` with an internal `deepGet`, and adjusting repository factories to lazily `require()` DB-specific adapters only when selected. Monorepo/workspace config is updated to include `packages/providers/*`, add new `@friggframework/provider-aws` and `@friggframework/provider-netlify` packages, and refresh lockfile dependencies (including AWS SDK version bumps and a `supertest` upgrade).

Written by Cursor Bugbot for commit dba001a. This will update automatically on new commits.
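The injected-adapter shape the overview describes can be sketched as below. The names (`Worker`, `queueClient`, `enqueue`, `send`) are illustrative assumptions following the overview's wording, not the framework's actual API:

```javascript
// Hedged sketch of adapter injection: a core worker that never imports an
// AWS SDK and instead accepts any object exposing a send() method.
class Worker {
    constructor({ queueClient }) {
        if (!queueClient || typeof queueClient.send !== 'function') {
            throw new Error('Worker requires an injected queueClient adapter');
        }
        this.queueClient = queueClient;
    }

    async enqueue(queueUrl, body) {
        // Provider-agnostic: the adapter may wrap SQS, QStash, or a Netlify
        // background function, or be a plain double in tests.
        return this.queueClient.send({ queueUrl, body: JSON.stringify(body) });
    }
}

// Tests inject a plain double instead of mocking an AWS SDK:
const sent = [];
const worker = new Worker({
    queueClient: { send: async (msg) => sent.push(msg) },
});
```

This is the same inversion the queue and scheduler factories already use: the provider package supplies the concrete client, and core only depends on the minimal interface.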