TidGi-Desktop/features/stepDefinitions/agent.ts
lin onetwo 9a6f3480f5
Feat/watch fs (#649)
* Add watch-filesystem-adaptor plugin and worker IPC

Introduces the watch-filesystem-adaptor TiddlyWiki plugin, enabling tag-based routing of tiddlers to sub-wikis by querying workspace info via worker thread IPC. Adds workerServiceCaller utility for worker-to-main service calls, updates workerAdapter and bindServiceAndProxy to support explicit service registration for workers, and documents the new IPC architecture. Updates wikiWorker and startNodeJSWiki to preload workspace ID and load the new plugin. Also updates the plugin build script to compile and copy the new plugin.
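The worker-to-main call path this describes can be sketched as a small correlation-id request/response layer. The names below (`createServiceCaller`, `Transport`) are illustrative stand-ins, not the actual `workerServiceCaller` API; in a real worker thread the transport would be the thread's `parentPort`.

```typescript
// Sketch of a worker-to-main service caller: each outgoing call gets an id,
// and the matching reply resolves the pending promise. Names are illustrative.
type Transport = {
  post: (message: unknown) => void;
  onMessage: (handler: (message: any) => void) => void;
};

let nextId = 0;
const pending = new Map<number, { resolve: (v: unknown) => void; reject: (e: Error) => void }>();

function createServiceCaller(transport: Transport) {
  transport.onMessage((message) => {
    const entry = pending.get(message.id);
    if (!entry) return;
    pending.delete(message.id);
    if (message.error) entry.reject(new Error(message.error));
    else entry.resolve(message.result);
  });
  return function callMainService(service: string, method: string, args: unknown[]): Promise<unknown> {
    const id = nextId++;
    return new Promise((resolve, reject) => {
      pending.set(id, { resolve, reject });
      transport.post({ id, service, method, args });
    });
  };
}
```

A loopback transport makes the correlation logic easy to exercise without spawning an actual worker thread.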

* test: wiki operation steps

* Add per-wiki labeled logging and console hijack

Introduces labeled loggers for each wiki, writing logs to separate files. Adds a logFor method to NativeService for logging with labels, updates interfaces, and hijacks worker thread console methods to redirect logs to main process for wiki-specific logging. Refactors workspaceID usage to workspace object for improved context.
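A minimal sketch of what a labeled logger like `logFor` might look like, assuming the label is simply prefixed onto each line and the sink decides which file the line is written to (names and format are illustrative, not the NativeService API):

```typescript
// Sketch of per-wiki labeled logging: the label routes/prefixes every line.
type LogLevel = 'info' | 'warn' | 'error';

function createLabeledLogger(label: string, sink: (line: string) => void) {
  return function logFor(level: LogLevel, message: string): void {
    // In the real service the sink would append to a wiki-specific log file.
    sink(`[${label}] ${level}: ${message}`);
  };
}
```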

* Update log handling for wiki worker and tests

Enhanced logging tests to check all log files, including wiki logs. Adjusted logger to write wiki worker logs to the main log directory. Updated e2e app script comment for correct usage.

* Enable worker thread access to main process services

Introduces a proxy system allowing worker threads to call main process services with full type safety and observable support. Adds worker-side service proxy creation, auto-attaches proxies to global.service, and updates service registration to use IPC descriptors. Documentation is added for usage and architecture.
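The property-access-to-IPC forwarding such a proxy system relies on can be sketched with a plain JavaScript `Proxy`; `dispatch` below stands in for the real IPC bridge and all names are hypothetical:

```typescript
// Sketch: turn `service.wiki.getTiddlerText('Index')` into a generic
// (service, method, args) tuple via a Proxy. `dispatch` is a stand-in
// for the worker-to-main IPC bridge.
function createServiceProxy<T extends object>(
  serviceName: string,
  dispatch: (service: string, method: string, args: unknown[]) => unknown,
): T {
  return new Proxy({} as T, {
    get(_target, method) {
      // Every property access becomes a function that forwards to the bridge
      return (...args: unknown[]) => dispatch(serviceName, String(method), args);
    },
  });
}
```

Typing the proxy as `T` is what gives callers the "full type safety" mentioned above: the interface is checked at compile time even though every call is forwarded dynamically.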

* Update ErrorDuringStart.md

* chore: upgrade ipc cat and allow clean vite cache

* Refactor wiki worker initialization and service readiness

Moved wiki worker implementation from wikiWorker.ts to wikiWorker/index.ts and deleted the old file. Added servicesReady.ts to manage worker service readiness and callbacks, and integrated notifyServicesReady into the worker lifecycle. Updated console hijack logic to wait for service readiness before hijacking. Improved worker management in Wiki service to support detaching workers and notifying readiness.

* Refactor wiki logging to use centralized logger

Removed per-wiki loggers and console hijacking in favor of a single labeled logger. All wiki logs, including errors, are now written to a unified log file. Updated worker and service code to route logs through the main logger and removed obsolete log file naming and management logic.

* fix: ipc cat log error

* Refactor wiki test paths and improve file save logic

Updated test step to use wikiTestRootPath for directory replacements and added wikiTestRootPath to paths.ts for clarity. Improved error handling and directory logic in watch-filesystem-adaptor.ts, including saving tiddlers directly to sub-wiki folders, more informative logging, and ensuring cleanup after file writes is properly awaited.

* rename

* Initial commit when initializing a new git repository.


* feat: basic watch-fs

* feat: check file not exist

* refactor: use exponential-backoff
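The `exponential-backoff` package used in this refactor wraps retries in `backOff(fn, options)`; a hand-rolled equivalent (option names mirror that API, but the implementation below is only a sketch) looks like:

```typescript
// Sketch of exponential backoff: retry a flaky async task, waiting
// startingDelay ms before the second attempt and multiplying the wait
// by timeMultiple after each failure, up to numOfAttempts tries.
async function retryWithBackOff<T>(
  task: () => Promise<T>,
  options = { numOfAttempts: 5, startingDelay: 100, timeMultiple: 2 },
): Promise<T> {
  let delay = options.startingDelay;
  for (let attempt = 1; ; attempt++) {
    try {
      return await task();
    } catch (error) {
      if (attempt >= options.numOfAttempts) throw error;
      await new Promise((resolve) => setTimeout(resolve, delay));
      delay *= options.timeMultiple;
    }
  }
}
```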

* fix: cleanup

* Refactor test setup and cleanup to separate file

Moved Before and After hooks from application.ts to a new cleanup.ts file for better organization and separation of concerns. Also removed unused imports and related code from application.ts. Minor type simplification in agent.ts for row parsing.

* test: modify and rename

* feat: enableFileSystemWatch

* refactor: unused utils.ts

* Update watch-filesystem-adaptor.ts

* refactor: use node-sentinel-file-watcher

* refactor: extract to two classes

* The logFor method lacks JSDoc describing the level parameter's

* Update startNodeJSWiki.ts

* fix: napi build

* Update electron-rebuild command in workflows

Changed the electron-rebuild command in release and test GitHub Actions workflows to use a comma-separated list for native modules instead of multiple -w flags. This simplifies the rebuild step for better-sqlite3 and nsfw modules.
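Assuming electron-rebuild's `-w`/`--which-module` flag, the change described might look like this in the workflow step (a config fragment; verify against the electron-rebuild version in use):

```shell
# Before: one -w flag per native module
npx electron-rebuild -w better-sqlite3 -w nsfw

# After: a single comma-separated list, per the commit description
npx electron-rebuild -w better-sqlite3,nsfw
```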

* lint

* don't build nsfw, try using prebuilds

* Update package.json

* Update workerAdapter.ts

* remove subWikiPlugin.ts as we use new filesystem adaptor that supports tag based sub wiki

* fix: build

* fix: wrong type

* lint

* remove `act(...)` warnings

* uninstall chokidar

* refactor and test

* lint

* remove unused logic, as we already use ipc syncadaptor, not afraid of wiki status change

* Update translation.json

* test: increase timeout in CI

* Update application.ts

* fix: AI's wrong cleanup logic hidden by as unknown as

* fix: AI's wrong `as unknown as` cast

* Update agent.feature

* Update wikiSearchPlugin.ts

* fix: A dynamic import callback was not specified.
2025-10-28 13:25:46 +08:00


import { After, DataTable, Given, Then } from '@cucumber/cucumber';
import { AIGlobalSettings, AIProviderConfig } from '@services/externalAPI/interface';
import { backOff } from 'exponential-backoff';
import fs from 'fs-extra';
import { isEqual, omit } from 'lodash';
import path from 'path';
import type { ISettingFile } from '../../src/services/database/interface';
import { MockOpenAIServer } from '../supports/mockOpenAI';
import { settingsPath } from '../supports/paths';
import type { ApplicationWorld } from './application';
// Backoff configuration for retries
const BACKOFF_OPTIONS = {
  numOfAttempts: 10,
  startingDelay: 200,
  timeMultiple: 1.5,
};
/**
 * Generate deterministic embedding vector based on a semantic tag
 * This allows us to control similarity in tests without writing full 384-dim vectors
 *
 * Strategy:
 * - Similar tags (note1, note1-similar) -> similar vectors (high similarity)
 * - Different tags (note1, note2) -> different vectors (medium similarity)
 * - Unrelated tags (note1, unrelated) -> very different vectors (low similarity)
 */
function generateSemanticEmbedding(tag: string): number[] {
  const vector: number[] = [];
  // Parse tag to determine semantic relationship
  // Format: "note1", "note2", "query-note1", "unrelated"
  const baseTag = tag.replace(/-similar$/, '').replace(/^query-/, '');
  const isSimilar = tag.includes('-similar');
  const isQuery = tag.startsWith('query-');
  const isUnrelated = tag === 'unrelated';
  // Generate base vector from tag
  const seed = Array.from(baseTag).reduce((hash, char) => {
    return ((hash << 5) - hash) + char.charCodeAt(0);
  }, 0);
  for (let dimension = 0; dimension < 384; dimension++) {
    const x = Math.sin((seed + dimension) * 0.1) * 10000;
    let value = x - Math.floor(x);
    // Adjust vector based on semantic relationship
    if (isUnrelated) {
      // Completely different direction
      value = -value;
    } else if (isSimilar || isQuery) {
      // Very similar (>95% similarity) - add small noise
      value = value + (Math.sin(dimension * 0.01) * 0.05);
    }
    // Normalize to [-1, 1]
    vector.push(value * 2 - 1);
  }
  return vector;
}
// Agent-specific Given steps
Given('I have started the mock OpenAI server', function(this: ApplicationWorld, dataTable: DataTable | undefined, done: (error?: Error) => void) {
  try {
    const rules: Array<{ response: string; stream?: boolean; embedding?: number[] }> = [];
    if (dataTable && typeof dataTable.raw === 'function') {
      const rows = dataTable.raw();
      // Skip header row
      for (let index = 1; index < rows.length; index++) {
        const row = rows[index];
        const response = (row[0] ?? '').trim();
        const stream = (row[1] ?? '').trim().toLowerCase() === 'true';
        const embeddingTag = (row[2] ?? '').trim();
        // Generate embedding from semantic tag if provided
        let embedding: number[] | undefined;
        if (embeddingTag) {
          embedding = generateSemanticEmbedding(embeddingTag);
        }
        if (response) rules.push({ response, stream, embedding });
      }
    }
    this.mockOpenAIServer = new MockOpenAIServer(15121, rules);
    this.mockOpenAIServer.start().then(() => {
      done();
    }).catch((error_: unknown) => {
      done(error_ as Error);
    });
  } catch (error) {
    done(error as Error);
  }
});
// Mock OpenAI server cleanup - for scenarios using mock OpenAI
After({ tags: '@mockOpenAI' }, async function(this: ApplicationWorld) {
  // Stop mock OpenAI server with timeout protection
  if (this.mockOpenAIServer) {
    try {
      await Promise.race([
        this.mockOpenAIServer.stop(),
        new Promise<void>((resolve) => setTimeout(resolve, 2000)),
      ]);
    } catch {
      // Ignore errors during cleanup
    } finally {
      this.mockOpenAIServer = undefined;
    }
  }
});
// Only keep agent-specific steps that can't use generic ones
Then('I should see {int} messages in chat history', async function(this: ApplicationWorld, expectedCount: number) {
  const currentWindow = this.currentWindow || this.mainWindow;
  if (!currentWindow) {
    throw new Error('No current window is available');
  }
  const messageSelector = '[data-testid="message-bubble"]';
  await backOff(
    async () => {
      // Wait for at least one message to exist
      await currentWindow.waitForSelector(messageSelector, { timeout: 5000 });
      // Count current messages
      const messages = currentWindow.locator(messageSelector);
      const currentCount = await messages.count();
      if (currentCount === expectedCount) {
        return; // Success
      } else if (currentCount > expectedCount) {
        throw new Error(`Expected ${expectedCount} messages but found ${currentCount} (too many)`);
      } else {
        // Not enough messages yet, throw to trigger retry
        throw new Error(`Expected ${expectedCount} messages but found ${currentCount}`);
      }
    },
    BACKOFF_OPTIONS,
  ).catch(async (error: unknown) => {
    // Look up the final count separately: if the locator itself fails
    // (e.g. the window already closed), we still throw a useful error
    // instead of masking the detailed message with a generic one.
    let finalCount: number | undefined;
    try {
      finalCount = await currentWindow.locator(messageSelector).count();
    } catch {
      // Fall back to an error message without the count
    }
    const foundPart = finalCount === undefined ? '' : ` Found ${finalCount}.`;
    throw new Error(`Could not find expected ${expectedCount} messages.${foundPart} Error: ${(error as Error).message}`);
  });
});
Then('the last AI request should contain system prompt {string}', async function(this: ApplicationWorld, expectedPrompt: string) {
  if (!this.mockOpenAIServer) {
    throw new Error('Mock OpenAI server is not running');
  }
  const lastRequest = this.mockOpenAIServer.getLastRequest();
  if (!lastRequest) {
    throw new Error('No AI request has been made yet');
  }
  // Find system message in the request
  const systemMessage = lastRequest.messages.find(message => message.role === 'system');
  if (!systemMessage) {
    throw new Error('No system message found in the AI request');
  }
  if (!systemMessage.content || !systemMessage.content.includes(expectedPrompt)) {
    throw new Error(`Expected system prompt to contain "${expectedPrompt}", but got: "${systemMessage.content}"`);
  }
});
Then('the last AI request should have {int} messages', async function(this: ApplicationWorld, expectedCount: number) {
  if (!this.mockOpenAIServer) {
    throw new Error('Mock OpenAI server is not running');
  }
  const lastRequest = this.mockOpenAIServer.getLastRequest();
  if (!lastRequest) {
    throw new Error('No AI request has been made yet');
  }
  const actualCount = lastRequest.messages.length;
  if (actualCount !== expectedCount) {
    throw new Error(`Expected ${expectedCount} messages in the AI request, but got ${actualCount}`);
  }
});
// Shared provider config used across steps (kept at module scope for reuse)
const providerConfig: AIProviderConfig = {
  provider: 'TestProvider',
  baseURL: 'http://127.0.0.1:15121/v1',
  models: [
    { name: 'test-model', features: ['language'] },
    { name: 'test-embedding-model', features: ['language', 'embedding'] },
    { name: 'test-speech-model', features: ['speech'] },
  ],
  providerClass: 'openAICompatible',
  isPreset: false,
  enabled: true,
};
const desiredModelParameters = { temperature: 0.7, systemPrompt: 'You are a helpful assistant.', topP: 0.95 };
Given('I ensure test ai settings exists', function() {
  // Build expected aiSettings from shared providerConfig and compare with actual
  const modelsArray = providerConfig.models;
  const modelName = modelsArray[0]?.name;
  const providerName = providerConfig.provider;
  const parsed = fs.readJsonSync(settingsPath) as Record<string, unknown>;
  const actual = (parsed.aiSettings as Record<string, unknown> | undefined) || null;
  if (!actual) {
    throw new Error('aiSettings not found in settings file');
  }
  const actualProviders = (actual.providers as Array<Record<string, unknown>>) || [];
  // Check TestProvider exists
  const testProvider = actualProviders.find(p => p.provider === providerName);
  if (!testProvider) {
    console.error('TestProvider not found in actual providers:', JSON.stringify(actualProviders, null, 2));
    throw new Error('TestProvider not found in aiSettings');
  }
  // Verify TestProvider configuration
  if (!isEqual(testProvider, providerConfig)) {
    console.error('TestProvider config mismatch. expected:', JSON.stringify(providerConfig, null, 2));
    console.error('TestProvider config actual:', JSON.stringify(testProvider, null, 2));
    throw new Error('TestProvider configuration does not match expected');
  }
  // Check ComfyUI provider exists
  const comfyuiProvider = actualProviders.find(p => p.provider === 'comfyui');
  if (!comfyuiProvider) {
    console.error('ComfyUI provider not found in actual providers:', JSON.stringify(actualProviders, null, 2));
    throw new Error('ComfyUI provider not found in aiSettings');
  }
  // Verify ComfyUI has test-flux model with workflow path
  const comfyuiModels = (comfyuiProvider.models as Array<Record<string, unknown>>) || [];
  const testFluxModel = comfyuiModels.find(m => m.name === 'test-flux');
  if (!testFluxModel) {
    console.error('test-flux model not found in ComfyUI models:', JSON.stringify(comfyuiModels, null, 2));
    throw new Error('test-flux model not found in ComfyUI provider');
  }
  // Verify workflow path
  const parameters = testFluxModel.parameters as Record<string, unknown> | undefined;
  if (!parameters || parameters.workflowPath !== 'C:/test/mock/workflow.json') {
    console.error('Workflow path mismatch. expected: C:/test/mock/workflow.json, actual:', parameters?.workflowPath);
    throw new Error('Workflow path not correctly saved');
  }
  // Verify default config
  const defaultConfig = actual.defaultConfig as Record<string, unknown>;
  const api = defaultConfig.api as Record<string, unknown>;
  if (api.provider !== providerName || api.model !== modelName) {
    console.error('Default config mismatch. expected provider:', providerName, 'model:', modelName);
    console.error('actual api:', JSON.stringify(api, null, 2));
    throw new Error('Default configuration does not match expected');
  }
});
Given('I add test ai settings', function() {
  let existing = {} as ISettingFile;
  if (fs.existsSync(settingsPath)) {
    existing = fs.readJsonSync(settingsPath) as ISettingFile;
  } else {
    // ensure settings directory exists so writeJsonSync won't fail
    fs.ensureDirSync(path.dirname(settingsPath));
  }
  const modelsArray = providerConfig.models;
  const modelName = modelsArray[0]?.name;
  const embeddingModelName = modelsArray[1]?.name;
  const speechModelName = modelsArray[2]?.name;
  const newAi: AIGlobalSettings = {
    providers: [providerConfig],
    defaultConfig: {
      api: {
        provider: providerConfig.provider,
        model: modelName,
        embeddingModel: embeddingModelName,
        speechModel: speechModelName,
      },
      modelParameters: desiredModelParameters,
    },
  };
  fs.writeJsonSync(settingsPath, { ...existing, aiSettings: newAi } as ISettingFile, { spaces: 2 });
});
function clearAISettings() {
  if (!fs.existsSync(settingsPath)) return;
  const parsed = fs.readJsonSync(settingsPath) as ISettingFile;
  const cleaned = omit(parsed, ['aiSettings']);
  fs.writeJsonSync(settingsPath, cleaned, { spaces: 2 });
}
export { clearAISettings };