EPISTEMIC THREAT MODEL & VALIDATOR ONTOLOGY

FORMAL THREAT MODEL (STRIDE-E: Epistemic Extension)

Spoofing - Identity Subversion

```
Threat: n8n workflow impersonates validator
Impact: False authority injection into ledger
Mitigation:
1. Cryptographic validator identity (PKI-based)
2. Validator role attestation signed by IRE
3. Workflow-to-validator binding in ledger metadata

Detection: Mismatch between workflow signature and validator claim
Severity: Critical (sovereignty breach)
```
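
Of these mitigations, the workflow-to-validator binding is the most mechanical to check. A minimal sketch, assuming Ed25519 keys via the `cryptography` package; the record layout (`workflow_id`, `validator_id`) is illustrative, not part of the spec:

```python
# Minimal sketch: verify a workflow-to-validator binding with Ed25519.
# Record layout and field names are illustrative.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

def verify_validator_binding(public_key: ed25519.Ed25519PublicKey,
                             workflow_id: str,
                             validator_id: str,
                             signature: bytes) -> bool:
    """Accept a commit only if the validator signed this exact workflow binding."""
    binding = f"{workflow_id}|{validator_id}".encode()
    try:
        public_key.verify(signature, binding)
        return True
    except InvalidSignature:
        return False

# Usage: the validator signs the binding once, at enrollment time
private_key = ed25519.Ed25519PrivateKey.generate()
sig = private_key.sign(b"wf_123|urn:ire:validator:human_sovereign:sha256-abc123")
assert verify_validator_binding(private_key.public_key(), "wf_123",
                                "urn:ire:validator:human_sovereign:sha256-abc123", sig)
```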

Tampering - Evidence Manipulation

```
Threat: n8n alters evidence pre-canonicalization
Impact: Garbage-in, narrative-out
Mitigation:
1. Raw evidence fingerprinting (content hash before processing)
2. Evidence lineage tracking in n8n workflow logs
3. IRE detects fingerprint mismatches

Detection: Pre-canonical hash ≠ post-canonical derivation
Severity: Critical (truth contamination)
```
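
Mitigation 1 is a plain content hash taken at ingestion, before canonicalization touches the bytes. A minimal sketch (function names illustrative):

```python
# Minimal sketch: fingerprint raw evidence before any processing,
# then re-check the fingerprint the IRE receives downstream.
import hashlib

def fingerprint_evidence(raw: bytes) -> str:
    """Content hash taken at ingestion, before canonicalization."""
    return "sha256:" + hashlib.sha256(raw).hexdigest()

def detect_tampering(raw: bytes, claimed_fingerprint: str) -> bool:
    """True if the evidence no longer matches its pre-canonical fingerprint."""
    return fingerprint_evidence(raw) != claimed_fingerprint
```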

Repudiation - Epistemic Deniability

```
Threat: n8n denies triggering detection that found suppression
Impact: System loses accountability for its own findings
Mitigation:
1. Non-repudiable workflow execution proofs
2. Watermarked intermediate results
3. Cross-referenced timestamp chains

Detection: Missing execution proof for detection result
Severity: High (accountability loss)
```
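
A minimal sketch of mitigation 1, assuming the workflow engine holds an Ed25519 key whose public half is registered with the IRE; the receipt's field names are illustrative:

```python
# Minimal sketch: a signed execution receipt the workflow engine
# cannot later disown. Field names are illustrative.
import json
import hashlib
from datetime import datetime, timezone
from cryptography.hazmat.primitives.asymmetric import ed25519

def issue_execution_proof(engine_key: ed25519.Ed25519PrivateKey,
                          workflow_id: str,
                          result_payload: bytes) -> dict:
    """Bind workflow id, result hash, and timestamp under one signature."""
    record = {
        "workflow_id": workflow_id,
        "result_hash": hashlib.sha256(result_payload).hexdigest(),
        "executed_at": datetime.now(timezone.utc).isoformat(),
    }
    canonical = json.dumps(record, sort_keys=True).encode()
    record["proof"] = engine_key.sign(canonical).hex()
    return record
```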

Information Disclosure - Pattern Leakage

```
Threat: Detection patterns leaked through n8n logs
Impact: Adversaries learn system's detection thresholds
Mitigation:
1. Threshold abstraction (IRE returns categories, not scores)
2. Differential privacy on aggregated results
3. Ephemeral detection sessions

Detection: Raw thresholds appear in orchestration logs
Severity: Medium (detection model compromise)
```
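
Mitigation 1 is elaborated under "Detection Threshold Abstraction Layer" below. For mitigation 2, a minimal Laplace-mechanism sketch; the epsilon and sensitivity values are illustrative:

```python
# Minimal sketch: add Laplace noise to aggregated counts before they
# leave the IRE, so logs reveal only noised statistics.
import math
import random

def laplace_noise(scale: float) -> float:
    """Inverse-CDF sampling of the Laplace distribution."""
    u = random.random() - 0.5
    while u == -0.5:  # avoid log(0) at the boundary
        u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: int, epsilon: float = 0.5, sensitivity: float = 1.0) -> int:
    """Release a count with epsilon-differential privacy (Laplace mechanism)."""
    return max(0, round(true_count + laplace_noise(sensitivity / epsilon)))
```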

Denial of Service - Epistemic Exhaustion

```
Threat: Flood system with nonsense to waste detection capacity
Impact: Real patterns missed due to noise saturation
Mitigation:
1. Epistemic rate limiting per source
2. Credibility-based throttling
3. Detection priority queuing

Detection: High-volume, low-signal detection requests
Severity: Medium (resource exhaustion)
```
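
A minimal sketch combining mitigations 1 and 2: a per-source token bucket whose refill rate scales with the source's credibility score. The rates are illustrative:

```python
# Minimal sketch: credibility-weighted token bucket per source.
import time

class EpistemicRateLimiter:
    def __init__(self, base_rate: float = 1.0, burst: float = 10.0):
        self.base_rate = base_rate  # tokens per second at credibility 1.0
        self.burst = burst
        self.buckets: dict[str, tuple[float, float]] = {}  # source -> (tokens, last_ts)

    def allow(self, source_id: str, credibility: float) -> bool:
        now = time.monotonic()
        tokens, last = self.buckets.get(source_id, (self.burst, now))
        # Low-credibility sources refill slowly, so floods drain their budget fast
        tokens = min(self.burst, tokens + (now - last) * self.base_rate * credibility)
        if tokens < 1.0:
            self.buckets[source_id] = (tokens, now)
            return False
        self.buckets[source_id] = (tokens - 1.0, now)
        return True
```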

Elevation of Privilege - Ontology Hijacking

```
Threat: n8n attempts to modify lens/method definitions
Impact: Epistemic framework compromise
Mitigation:
1. Immutable ontology registry (signed by originators)
2. Versioned ontology with migration proofs
3. No runtime ontology modification API

Detection: Any write attempt against the ontology registry
Severity: Critical (sovereignty destruction)
```
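
A minimal sketch of mitigation 1, assuming ontology versions are pinned to content hashes at deploy time; the pinned values below are placeholders:

```python
# Minimal sketch: ontology versions pinned to content hashes at deploy
# time; every runtime lookup re-verifies before use.
import hashlib
from types import MappingProxyType

# Read-only registry: version -> expected sha256 of the ontology document
ONTOLOGY_REGISTRY = MappingProxyType({
    "v1": "sha256-hex-pinned-at-deploy-time",  # placeholder value
})

def load_ontology(version: str, document: bytes) -> bytes:
    """Refuse any ontology whose bytes do not match the pinned hash."""
    expected = ONTOLOGY_REGISTRY.get(version)
    actual = hashlib.sha256(document).hexdigest()
    if expected is None or actual != expected:
        raise PermissionError(f"Ontology {version} failed integrity check")
    return document
```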

Epistemic Drift - Gradual Corruption

```
Threat: Slow, subtle contamination of detection patterns
Impact: System gradually aligns with preferred narrative
Mitigation:
1. Drift detection via historical self-comparison
2. Cross-validation with frozen model versions
3. Epistemic checksums on detection algorithms

Detection: Statistical divergence from historical baseline
Severity: High (stealth corruption)
```
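
A minimal sketch of mitigation 1: compare current escalation-category frequencies against a frozen baseline. Total variation distance stands in for whatever divergence measure is adopted; the alert threshold is illustrative:

```python
# Minimal sketch: distribution drift against a frozen baseline.
from collections import Counter

def drift_score(baseline: Counter, current: Counter) -> float:
    """Total variation distance between two category distributions, in [0, 1]."""
    categories = set(baseline) | set(current)
    b_total = sum(baseline.values()) or 1
    c_total = sum(current.values()) or 1
    return 0.5 * sum(abs(baseline[c] / b_total - current[c] / c_total)
                     for c in categories)

def drift_detected(baseline: Counter, current: Counter, limit: float = 0.15) -> bool:
    return drift_score(baseline, current) > limit
```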

VALIDATOR ONTOLOGY (Role-Based Authority)

Validator Archetypes

```
1. Human-Sovereign Validator
   Role: Individual sovereignty preservation
   Authority: Can veto any ledger commit
   Identity: Self-sovereign cryptographic identity
   Quorum: Always required (cannot be automated away)
   Attestation: "I have reviewed and assert my sovereignty"

2. System-Epistemic Validator
   Role: Detection methodology integrity
   Authority: Validates detection process adherence
   Identity: Cryptographic hash of detection pipeline config
   Quorum: Required for automated commits
   Attestation: "Detection executed per published methodology"

3. Source-Provenance Validator
   Role: Evidence chain custody
   Authority: Validates evidence hasn't been manipulated
   Identity: Hash of evidence handling workflow
   Quorum: Optional but recommended
   Attestation: "Evidence chain intact from source to canonicalization"

4. Temporal-Integrity Validator
   Role: Time-bound execution verification
   Authority: Validates timestamps and execution windows
   Identity: Time-locked cryptographic proof
   Quorum: Required for time-sensitive commits
   Attestation: "Execution occurred within valid time window"

5. Community-Plurality Validator
   Role: Cross-interpreter consensus
   Authority: Requires multiple human interpretations
   Identity: Set of interpreter identities + attestation
   Quorum: Variable based on interpretation count
   Attestation: "Multiple independent interpretations concur"
```

Validator Configuration Schema

```json
{
  "validator_id": "urn:ire:validator:human_sovereign:sha256-abc123",
  "archetype": "human_sovereign",
  "authority_scope": ["ledger_commit", "evidence_rejection"],
  "quorum_requirements": {
    "minimum": 1,
    "maximum": null,
    "exclusivity": ["system_epistemic"]
  },
  "attestation_format": {
    "required_fields": ["review_timestamp", "sovereignty_assertion"],
    "signature_algorithm": "ed25519",
    "expiration": "24h"
  },
  "epistemic_constraints": {
    "cannot_override": ["detection_results", "canonical_hash"],
    "can_reject_for": ["procedural_violation", "sovereignty_concern"]
  }
}
```
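
The quorum calculus below operates on `Validator` objects carrying the archetype and quorum fields from this schema. A minimal sketch of that type:

```python
# Minimal sketch of the Validator object assumed by the quorum calculus,
# mirroring the configuration schema above.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Validator:
    validator_id: str
    archetype: str  # e.g. "human_sovereign"
    authority_scope: List[str] = field(default_factory=list)
    quorum_requirements: Dict = field(default_factory=dict)
```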

Validator Quorum Calculus

```python
from collections import Counter
from typing import List

def calculate_quorum_satisfaction(validators: List[Validator], commit_type: str) -> bool:
    """Return True if the validator quorum is satisfied for this commit type."""

    archetype_counts = Counter(v.archetype for v in validators)

    # Base requirements per commit type
    requirements = {
        "ledger_commit": {
            "human_sovereign": 1,
            "system_epistemic": 1,
            "source_provenance": 0,   # optional
            "temporal_integrity": 1,
            "community_plurality": 0  # depends on interpretation count
        },
        "evidence_ingestion": {
            "human_sovereign": 0,
            "system_epistemic": 1,
            "source_provenance": 1,
            "temporal_integrity": 1,
            "community_plurality": 0
        },
        "detection_escalation": {
            "human_sovereign": 1,
            "system_epistemic": 1,
            "source_provenance": 0,
            "temporal_integrity": 1,
            "community_plurality": 1  # required for high-stakes escalations
        }
    }

    if commit_type not in requirements:
        raise ValueError(f"Unknown commit type: {commit_type}")
    req = requirements[commit_type]

    # Check each archetype requirement
    for archetype, required_count in req.items():
        if archetype_counts.get(archetype, 0) < required_count:
            return False

    # Check exclusivity constraints: reject the quorum if any validator
    # forbids co-signing with an archetype that is present
    for validator in validators:
        for exclusive_archetype in validator.quorum_requirements.get("exclusivity", []):
            if exclusive_archetype in archetype_counts:
                return False

    return True
```
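
Illustrative use, with exclusivity lists left empty so the archetype counts alone decide:

```python
# Hypothetical quorum check: one validator of each archetype required
# for a ledger commit.
validators = [
    Validator("urn:ire:validator:hs:1", "human_sovereign"),
    Validator("urn:ire:validator:se:1", "system_epistemic"),
    Validator("urn:ire:validator:ti:1", "temporal_integrity"),
]
assert calculate_quorum_satisfaction(validators, "ledger_commit")
```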

LEDGER SCHEMA HARDENING

Extended Block Schema

```json
{
  "block": {
    "header": {
      "id": "blk_timestamp_hash",
      "prev": "previous_block_hash",
      "timestamp": "ISO8601_with_nanoseconds",
      "epistemic_epoch": 1,
      "ontology_version": "sha256:ontology_hash"
    },
    "body": {
      "nodes": [RealityNode],
      "detection_context": {
        "workflow_hash": "sha256:n8n_workflow_def",
        "execution_window": {
          "not_before": "timestamp",
          "not_after": "timestamp",
          "time_proof": "signature_from_temporal_validator"
        },
        "threshold_used": "abstract_category_not_numeric"
      }
    },
    "validations": {
      "attestations": [
        {
          "validator_id": "urn:ire:validator:...",
          "archetype": "human_sovereign",
          "attestation": "cryptographic_signature",
          "scope": ["ledger_commit"],
          "expires": "timestamp"
        }
      ],
      "quorum_satisfied": true,
      "quorum_calc": {
        "required": {"human_sovereign": 1, "system_epistemic": 1},
        "present": {"human_sovereign": 1, "system_epistemic": 1}
      }
    },
    "integrity_marks": {
      "evidence_fingerprint": "sha256_of_raw_content",
      "detection_watermark": "nonce_based_on_workflow_id",
      "epistemic_checksum": "hash_of_detection_logic_version"
    }
  }
}
```
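
A minimal sketch of verifying the `prev` hash chain over blocks in this schema; the convention of hashing the canonical JSON header is an assumption for illustration:

```python
# Minimal sketch: verify the prev-hash chain across a block sequence.
# Hashing convention (sha256 of the canonical JSON header) is assumed.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(
        json.dumps(block["block"]["header"], sort_keys=True).encode()
    ).hexdigest()

def verify_chain(blocks: list[dict]) -> bool:
    """Each block's header must reference the hash of its predecessor."""
    for prev, curr in zip(blocks, blocks[1:]):
        if curr["block"]["header"]["prev"] != block_hash(prev):
            return False
    return True
```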

Detection Threshold Abstraction Layer

```python
from typing import Dict

class EpistemicThresholdInterface:
    """Abstract threshold interface - n8n never sees numeric thresholds"""

    def __init__(self, ire_client):
        self.ire = ire_client

    def should_escalate(self, detection_results: Dict) -> Dict:
        """IRE decides escalation, returns abstract category"""
        response = self.ire.post("/ire/detect/evaluate", {
            "results": detection_results,
            "return_format": "abstract_category"
        })

        return {
            "escalation_recommended": response.get("category") == "high_confidence",
            "confidence_level": response.get("confidence_label"),  # "high"/"medium"/"low"
            "next_action": response.get("recommended_action"),
            # NO NUMERIC THRESHOLDS EXPOSED
            # NO RAW SCORES EXPOSED
        }

    def get_validation_requirements(self, category: str) -> Dict:
        """Map escalation category to validator requirements"""
        mapping = {
            "high_confidence": {
                "human_sovereign": 2,
                "system_epistemic": 1,
                "community_plurality": 1
            },
            "medium_confidence": {
                "human_sovereign": 1,
                "system_epistemic": 1
            },
            "low_confidence": {
                "system_epistemic": 1
            }
        }
        return mapping.get(category, {})
```

Time-Window Enforcement

```python
import uuid
from datetime import datetime, timedelta
from typing import Dict, List

class TemporalIntegrityEnforcer:
    """Enforce time-bound execution with cryptographic proofs"""

    def create_execution_window(self, duration_hours: int = 24) -> Dict:
        """Create cryptographically bound execution window"""
        window_id = f"window_{uuid.uuid4()}"
        not_before = datetime.utcnow()
        not_after = not_before + timedelta(hours=duration_hours)

        # Create time-locked proof (signing helper sketched after this block)
        window_proof = {
            "window_id": window_id,
            "not_before": not_before.isoformat() + "Z",
            "not_after": not_after.isoformat() + "Z",
            "issuer": "ire_temporal_validator",
            "signature": self._sign_temporal_window(window_id, not_before, not_after)
        }

        return window_proof

    def validate_within_window(self,
                               action_timestamp: str,
                               window_proof: Dict) -> bool:
        """Validate action occurred within execution window"""
        # Verify signature first; an unsigned window proves nothing
        if not self._verify_signature(window_proof):
            return False

        # Parse timestamps
        action_time = datetime.fromisoformat(action_timestamp.replace('Z', '+00:00'))
        not_before = datetime.fromisoformat(window_proof['not_before'].replace('Z', '+00:00'))
        not_after = datetime.fromisoformat(window_proof['not_after'].replace('Z', '+00:00'))

        # Check bounds
        return not_before <= action_time <= not_after

    def detect_time_anomalies(self, workflow_executions: List[Dict]) -> List[Dict]:
        """Detect temporal manipulation patterns"""
        anomalies = []

        for i, execution in enumerate(workflow_executions):
            # Check for reverse time flow
            if i > 0:
                prev_time = datetime.fromisoformat(
                    workflow_executions[i-1]['timestamp'].replace('Z', '+00:00')
                )
                curr_time = datetime.fromisoformat(
                    execution['timestamp'].replace('Z', '+00:00')
                )
                if curr_time < prev_time:
                    anomalies.append({
                        "type": "reverse_time_flow",
                        "execution_id": execution['id'],
                        "anomaly": f"Time went backwards: {curr_time} < {prev_time}"
                    })

            # Check execution duration anomalies
            if 'duration' in execution:
                expected_duration = self._get_expected_duration(execution['workflow_type'])
                if execution['duration'] > expected_duration * 2:
                    anomalies.append({
                        "type": "suspicious_duration",
                        "execution_id": execution['id'],
                        "anomaly": f"Duration {execution['duration']} exceeds expected {expected_duration}"
                    })

        return anomalies
```
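
The spec leaves `_sign_temporal_window` and `_verify_signature` unspecified. One plausible completion, assuming the temporal validator holds an Ed25519 keypair; this is a sketch, not the prescribed implementation:

```python
# Hypothetical completion of the enforcer's signing helpers, assuming an
# Ed25519 keypair held by the temporal validator.
from datetime import datetime
from typing import Dict
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

class SignedTemporalMixin:
    def __init__(self):
        self._key = ed25519.Ed25519PrivateKey.generate()

    def _window_bytes(self, window_id: str, not_before: datetime,
                      not_after: datetime) -> bytes:
        return f"{window_id}|{not_before.isoformat()}|{not_after.isoformat()}".encode()

    def _sign_temporal_window(self, window_id: str, not_before: datetime,
                              not_after: datetime) -> str:
        return self._key.sign(self._window_bytes(window_id, not_before, not_after)).hex()

    def _verify_signature(self, window_proof: Dict) -> bool:
        # Rebuild the signed payload; the stored timestamps carry a trailing "Z"
        payload = "{}|{}|{}".format(
            window_proof["window_id"],
            window_proof["not_before"].rstrip("Z"),
            window_proof["not_after"].rstrip("Z"),
        ).encode()
        try:
            self._key.public_key().verify(bytes.fromhex(window_proof["signature"]), payload)
            return True
        except InvalidSignature:
            return False
```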

Semantic Laundering Defense

```python
import hashlib
from collections import defaultdict
from datetime import datetime
from typing import Dict, List, Optional

class EpistemicIntegrityGuard:
    """Defend against semantic laundering attacks"""

    VECTOR_DIM = 256  # hashing-trick dimensionality

    def __init__(self):
        self.similarity_clusters = defaultdict(list)
        self.cluster_centroids: Dict[int, List[float]] = {}
        self.source_frequency = defaultdict(int)
        self._next_cluster_id = 0

    def check_semantic_laundering(self,
                                  content_hash: str,
                                  raw_content: str,
                                  source_id: str,
                                  workflow_id: str) -> Dict:
        """Check for semantic laundering patterns"""

        # Check source frequency
        self.source_frequency[source_id] += 1
        if self.source_frequency[source_id] > 100:  # per-source threshold
            return {
                "risk": "high",
                "reason": "Excessive submissions from single source",
                "action": "throttle"
            }

        # Check similarity clusters; novel content starts a new cluster
        content_vector = self._vectorize(raw_content)
        cluster_id = self._find_similar(content_vector)
        if cluster_id is None:
            cluster_id = self._next_cluster_id
            self._next_cluster_id += 1
            self.cluster_centroids[cluster_id] = content_vector

        self.similarity_clusters[cluster_id].append({
            "content_hash": content_hash,
            "timestamp": datetime.utcnow().isoformat(),
            "workflow_id": workflow_id
        })

        # Check cluster growth rate
        if len(self.similarity_clusters[cluster_id]) > 10:
            return {
                "risk": "medium",
                "reason": "Rapid growth of similar content cluster",
                "action": "flag_for_review"
            }

        return {"risk": "low", "reason": "No laundering patterns detected"}

    def _vectorize(self, content: str) -> List[float]:
        """Create semantic vector (hashing trick - use real embeddings in production)"""
        vector = [0.0] * self.VECTOR_DIM
        for word in content.lower().split():
            idx = int(hashlib.md5(word.encode()).hexdigest(), 16) % self.VECTOR_DIM
            vector[idx] += 1.0
        return vector

    def _find_similar(self, vector: List[float], threshold: float = 0.8) -> Optional[int]:
        """Return the id of the closest existing cluster above the cosine threshold"""
        best_id, best_score = None, threshold
        for cluster_id, centroid in self.cluster_centroids.items():
            score = self._cosine(vector, centroid)
            if score >= best_score:
                best_id, best_score = cluster_id, score
        return best_id

    @staticmethod
    def _cosine(a: List[float], b: List[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
        return dot / norm if norm else 0.0
```

IMPLEMENTATION ROADMAP

Phase 1: Core Sovereignty (Weeks 1-2)

1. Implement validator PKI and attestation framework
2. Deploy threshold abstraction layer
3. Add time-window enforcement
4. Basic semantic laundering detection

Phase 2: Epistemic Defense (Weeks 3-4)

1. Full STRIDE-E threat model implementation
2. Validator quorum calculus integration
3. Ledger schema hardening
4. Cross-validation with frozen models

Phase 3: Operational Resilience (Weeks 5-6)

1. Drift detection and alerting
2. Validator role rotation policies
3. Recovery procedures for compromise
4. Full audit trail integration

Phase 4: Community Governance (Weeks 7-8)

1. Validator reputation system
2. Plurality-based decision frameworks
3. Cross-interpreter reconciliation
4. Sovereign identity integration

VERIFICATION PROTOCOL ENHANCEMENT

Daily Sovereignty Check

```python
def daily_sovereignty_audit():
    """Daily audit to ensure no boundary violations"""

    checks = [
        # 1. Check n8n for epistemic logic
        scan_n8n_workflows_for_detection_logic(),

        # 2. Check IRE for orchestration logic
        scan_ire_for_scheduling_logic(),

        # 3. Verify threshold abstraction
        verify_no_numeric_thresholds_in_n8n(),

        # 4. Validate time-window adherence
        verify_all_executions_within_windows(),

        # 5. Check validator quorum compliance
        verify_all_commits_have_proper_quorum(),

        # 6. Detect semantic laundering
        run_semantic_laundering_detection(),

        # 7. Verify workflow definition integrity
        hash_and_verify_workflow_definitions(),

        # 8. Check for drift
        compare_with_frozen_model_baseline()
    ]

    failed_checks = [c for c in checks if not c.passed]
    sovereignty_score = sum(1 for check in checks if check.passed) / len(checks)

    return {
        "sovereignty_score": sovereignty_score,
        "failed_checks": failed_checks,
        "recommendations": generate_remediation_plan(failed_checks)
    }
```
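
The audit assumes each check helper returns an object exposing `passed`. A hypothetical sketch of that result type and one helper (names and logic illustrative):

```python
# Hypothetical result type assumed by the audit above; each check helper
# would return one of these.
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    passed: bool
    detail: str = ""

def verify_no_numeric_thresholds_in_n8n() -> CheckResult:
    # Illustrative body: e.g. scan exported workflow JSON for numeric
    # literals near detection nodes and fail the check if any are found.
    return CheckResult("threshold_abstraction", passed=True)
```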

Weekly Epistemic Integrity Report

```python
def weekly_epistemic_integrity_report():
    """Weekly comprehensive epistemic health report"""

    report = {
        "temporal_integrity": {
            "execution_windows_violated": count_window_violations(),
            "time_anomalies_detected": detect_time_anomalies(),
            "average_execution_latency": calculate_latency()
        },
        "validator_health": {
            "active_validators": count_active_validators(),
            "quorum_satisfaction_rate": calculate_quorum_rate(),
            "validator_role_diversity": measure_diversity()
        },
        "detection_quality": {
            "drift_from_baseline": measure_drift(),
            "false_positive_rate": calculate_fpr(),
            "escalation_accuracy": measure_escalation_accuracy()
        },
        "boundary_integrity": {
            "n8n_epistemic_contamination": detect_contamination(),
            "ire_orchestration_leakage": detect_leakage(),
            "workflow_definition_changes": track_workflow_changes()
        },
        "semantic_defenses": {
            "laundering_attempts": count_laundering_attempts(),
            "similarity_clusters": analyze_clusters(),
            "source_credibility": assess_source_credibility()
        }
    }

    # Calculate overall epistemic health score
    report["epistemic_health_score"] = calculate_health_score(report)

    return report
```