TidGi-Desktop/features/stepDefinitions/agent.ts
lin onetwo 6384fd8bd1
Fix/misc bug (#677)
* fix: not removed

* Optimize tidgi.config.json writes for workspace updates

Update logic to write tidgi.config.json only for the modified workspace instead of all wiki workspaces on each update. This reduces redundant file operations and improves performance during workspace updates.

* Refactor workspace saving and UI update logic

Introduced a private saveWorkspacesToSettings method to centralize logic for saving workspaces and removing syncable fields from wiki workspaces. The set and setWorkspaces methods now support skipping UI updates for batch operations, improving performance. Fixed minor issues in legacy migration and error messages.

* Add 'Ask AI' context menu and wiki embed split view

Introduces an 'Ask AI' option to the wiki context menu, enabling users to send selected text to an agent chat in a split view with the wiki embedded. Implements new tab type WIKI_EMBED, updates tab and channel types, adds localization, manages BrowserView bounds for embedding, and ensures persistence and IPC wiring for the new workflow.
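As a rough sketch of the bounds management (illustrative only; the function and variable names here are assumptions, not the actual view service code):

```typescript
import { BrowserView, BrowserWindow } from 'electron';

// Sketch: pin the embedded wiki view to the right half of the window,
// leaving the left half for the agent chat. Names are illustrative.
function layoutWikiEmbed(window: BrowserWindow, wikiView: BrowserView): void {
  window.addBrowserView(wikiView);
  const [width, height] = window.getContentSize();
  const half = Math.round(width / 2);
  wikiView.setBounds({ x: half, y: 0, width: width - half, height });
}
```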

* Update wiki

* electron chrome MCP mode sometimes won't show browser view

Clarified troubleshooting steps in docs/MCP.md regarding browser view issues and updated the instructions. Reordered the 'start:dev:mcp' script in package.json for better organization.

* Add agent selection to 'Talk with AI' context menu

Replaces the 'Ask AI' context menu with 'Talk with AI' and adds a submenu for selecting different agent definitions. Updates translations for all supported languages, modifies the askAIWithSelection channel to support agentDefId, and refactors tab creation logic to support split view with agent selection. Improves robustness in view management by handling case-insensitive workspace IDs and custom bounds logic.

* Add e2e test and refactor 'Talk with AI' split view logic

Introduces a new Cucumber feature for 'Talk with AI' from wiki selection, adds a step definition to trigger the workflow via IPC, and refactors split view tab creation to reuse existing tabs when possible. Updates the agent browser service to support finding or creating the appropriate split view tab, and adjusts menu and view services for improved robustness and code clarity. Also adds test IDs to relevant components for more reliable UI testing.

* Update defaultWiki.feature

* Add config error handling and i18n for agent errors

Introduces a new feature test for configuration error handling, adds step definition to remove AI settings for testing, and updates error message rendering to support new error types. Internationalized error messages and button labels for configuration issues are added in both English and Chinese locales. The error message renderer now uses a data-testid for easier testing and recognizes additional error types as fixable in settings.

* Refactor feature files to use two-column selector tables

Updated all feature files to use a standardized two-column format for selector tables, with explicit 'element description' and 'selector' columns. Step definitions in ui.ts were refactored to support this format, improving readability and maintainability of test steps and error handling.
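For reference, a minimal sketch of a step consuming the two-column format (the real steps in ui.ts are more involved, and the step wording here is illustrative):

```typescript
import { DataTable, When } from '@cucumber/cucumber';
import type { ApplicationWorld } from './application';

// Hypothetical step: click each element listed in a two-column table.
When('I click the following elements:', async function(this: ApplicationWorld, dataTable: DataTable) {
  const window = this.currentWindow || this.mainWindow;
  if (!window) throw new Error('No current window is available');
  // rows() drops the header row ('element description' | 'selector').
  for (const [description, selector] of dataTable.rows()) {
    try {
      await window.click(selector);
    } catch (error) {
      // The description column exists purely to make failures readable.
      throw new Error(`Failed to click "${description}" (${selector}): ${(error as Error).message}`);
    }
  }
});
```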

* Delete tiddlywiki

* test: allow parallel

* test: implement scenario isolation for E2E tests

- Isolate each test scenario in test-artifacts/{scenarioSlug}/ directory
- Use dynamic ports for the mock OpenAI server to avoid port conflicts (see the sketch after this list)
- Log VIEW_LOADED event via did-finish-load in main process (more reliable)
- Search all .log files when waiting for log markers
- Increase timeout for log marker steps to 15 seconds
- Fix ts-node cache issues by clearing cache before tests
- Move application launch to individual scenarios (required for mock server setup)

All 45 E2E test scenarios now pass consistently.
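The dynamic-port approach relies on the OS picking a free port when a server listens on port 0, roughly:

```typescript
import http from 'http';

// Listening on port 0 makes the OS assign a free port, so parallel
// scenarios never fight over a fixed one.
const server = http.createServer((_request, response) => response.end('ok'));
server.listen(0, '127.0.0.1', () => {
  const address = server.address();
  if (address && typeof address !== 'string') {
    console.log(`mock server listening on http://127.0.0.1:${address.port}`);
  }
});
```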

* refactor: optimize agent.feature by moving common steps to Background

- Add MockOpenAIServer.addRules() method to append responses dynamically (sketched after this list)
- Add 'I have started the mock OpenAI server without rules' step for Background
- Add 'I add mock OpenAI responses:' step to inject responses per scenario
- Move application launch and navigation to Background (shared by all scenarios)
- Keep scenario-specific mock responses in individual scenarios

This improves test maintainability by reducing duplication while keeping
scenario-specific configuration flexible.
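A plausible shape for addRules() (the actual implementation in features/supports/mockOpenAI.ts may differ):

```typescript
type MockRule = { response: string; stream?: boolean; embedding?: number[] };

class MockOpenAIServerSketch {
  private rules: MockRule[] = [];

  /** Append responses at runtime, so Background can start the server empty. */
  addRules(rules: MockRule[]): void {
    this.rules.push(...rules);
  }
}
```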

* lint

* Refactor scenario path helpers into shared module

Moved scenario-specific path helper functions from individual step definition files to a centralized 'features/supports/paths.ts' module. Updated imports in step definitions to use the shared helpers, improving code reuse and maintainability. Also enhanced test for ContextService to skip optional runtime keys.
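For context, getSettingsPath presumably resolves into the scenario's artifact directory, along these lines (inferred from the paths used in step definitions, not copied from paths.ts):

```typescript
import path from 'path';
import type { ApplicationWorld } from '../stepDefinitions/application';

// Inferred sketch: settings.json lives under the per-scenario artifact root.
export function getSettingsPath(world: ApplicationWorld): string {
  return path.resolve(process.cwd(), 'test-artifacts', world.scenarioSlug, 'userData-test', 'settings', 'settings.json');
}
```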

* Refactor slug generation to use shared slugify helper

Introduced a new src/helpers/slugify.ts utility for consistent slug generation across the codebase. Updated appPaths.ts to use the shared slugify function, improving maintainability and ensuring identical behavior for test scenario slugs. Added documentation and clarified slugification rules in relevant files. Minor comments and clarifications were added to E2E and mock server code.
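A minimal sketch of such a helper (the real rules in src/helpers/slugify.ts may cover more edge cases):

```typescript
// Sketch: lowercase, collapse non-alphanumerics into dashes, trim dashes.
export function slugify(text: string): string {
  return text
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '');
}
```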

* Enforce strict timeout rules in E2E test steps

Added and clarified critical warnings for AI agents regarding timeout modifications in application, cleanup, and wiki step definitions. All timeouts are now strictly limited to 5s local and 10s CI, with explicit comments and environment-based values. Updated documentation and code comments to reinforce that timeouts indicate real bugs and should not be increased.
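The convention boils down to an environment-based constant, e.g. (constant name assumed):

```typescript
// CRITICAL: do not raise these values. A step that needs more time than
// this indicates a real bug, not a slow environment.
const STEP_TIMEOUT_MS = process.env.CI ? 10_000 : 5_000;
```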

* Update features/stepDefinitions/application.ts

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Refactor E2E helpers, improve test reliability and cleanup

Centralizes data table parsing for UI step definitions, refactors mock OpenAI server setup, and improves workspace settings path handling for tests. Adjusts timeouts for window and app closing to better reflect real-world performance. Fixes type usage in workspace ID lookups and adds error handling for resize observer and cleanup in WikiEmbedTabContent. Enhances agent browser tab logic and view service cleanup to prevent memory leaks and catch workspace ID casing issues.

* Update agent.ts

* fix: resolve all E2E test timeout issues

* fix: improve CI test reliability with better timing and cleanup

- Use exponential-backoff library for agent creation retry logic
- Extend agent cancel delay to 1000ms for CI environments
- Fix git log refresh marker timing with queueMicrotask
- Improve cleanup timeout handling with force close strategy

All E2E tests passing locally including previously failing CI tests.

* Improve Git log E2E signal and add debug logging

Renames the test artifact in the CI workflow for clarity. Moves the E2E test timing log in useGitLogData to after entries are rendered, using a more reliable signal. Adds a debug log to notifyGitStateChange for better traceability.

* test-artifacts-ci

* Optimize test artifact handling and Git log logging

Update CI workflow to clean up large cache folders in test artifacts and only upload logs, settings, and screenshots to reduce artifact size. Refactor useGitLogData to log immediately after state updates for improved E2E test reliability, removing unnecessary setTimeout.

* Update useGitLogData.ts

* Improve Git log E2E test logging and .gitignore

Added 'test-artifacts-ci.zip' to .gitignore. Moved the '[test-id-git-log-refreshed]' log to immediately after data load for more reliable E2E test detection, and removed redundant logging from the render effect in useGitLogData.ts.

* Update useGitLogData.ts

* Update useGitLogData.ts

* Fix git log refresh marker not appearing in CI

- Move git-log-refreshed marker before RAF to ensure it's recorded
- RAF callbacks may not execute reliably in headless CI environments
- Add debug logging to track loadGitLog execution
- Add try-catch around log call to catch any errors
- Keep git-log-data-rendered in useEffect for UI tracking

* Update useGitLogData.ts

* Update useGitLogData.ts

* Add comprehensive logging to diagnose git-log-refreshed issue

- Log before RAF and inside RAF to pinpoint exact failure location
- Add try-catch to capture any errors
- Two log markers: before-raf and in-raf
- This will definitively show where the logging fails in CI

* Fix race condition: prevent concurrent loadGitLog calls

Root cause: commit triggers 2 refreshes (gitStateChange$ + handleCommitSuccess)
- First loadGitLog (refreshTrigger=1) succeeds
- Second loadGitLog (refreshTrigger=2) starts but never completes
- Add loadGitLogInProgress guard to prevent concurrent execution (see the sketch below)
- Log when loadGitLog is skipped due to in-progress call

This ensures git-log-refreshed is always logged after commit.
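In outline, the guard works like this (names and signature assumed; the real code lives in useGitLogData.ts):

```typescript
let loadGitLogInProgress = false;

async function loadGitLog(fetchEntries: () => Promise<unknown>): Promise<void> {
  if (loadGitLogInProgress) {
    // Log the skip so a trace of the race shows up in E2E logs.
    console.log('loadGitLog skipped: already in progress');
    return;
  }
  loadGitLogInProgress = true;
  try {
    await fetchEntries();
    console.log('[test-id-git-log-refreshed]');
  } finally {
    loadGitLogInProgress = false;
  }
}
```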

* Remove redundant triggerRefresh calls causing race condition

- handleCommitSuccess/Revert/Undo no longer call triggerRefresh
- gitStateChange$ observable already triggers refresh for these operations
- Redundant calls caused 2 concurrent loadGitLog, causing CI test failures
- Local tests passed because both completed; CI failed because 2nd never completed

This ensures only 1 loadGitLog runs per git operation.

* Remove unused triggerRefresh parameter from useCommitSelection

- triggerRefresh no longer used in handlers
- Remove from interface and call site
- Clean up lint errors

* Remove triggerRefresh completely - no longer needed

- Observable subscription handles all git state changes
- Remove function definition and exports
- Fix all lint errors

Root cause resolved: commit triggered double refresh causing race condition.
Now only single refresh via observable.

* Remove fixed time waits from gitLog.feature and fix race condition

- Remove all fixed time wait steps from gitLog.feature (14 instances)
- Remove redundant triggerRefresh calls in handleCommitSuccess/Revert/Undo
- Add loadGitLogInProgress guard to prevent concurrent loadGitLog
- Root cause: commit triggered 2 refreshes causing race condition
- Only gitStateChange$ observable now triggers refresh
- All 4 gitLog tests pass locally

* Fix clear timing: clear log BEFORE commit, not after

Root cause: test cleared git-log-refreshed AFTER commit completed
- But commit already triggered refresh and logged git-log-refreshed
- Clear deleted it, then test waited for new log that would never come
- Solution: clear BEFORE clicking commit button
- This way commit's git-log-refreshed is the first one after clear

Test now passes locally.

* Update cleanup.ts

* Create an initial commit when initializing a new git repository.

* Refactor feature steps for multi-element and log marker tables

Updated multiple feature files and step definitions to support table-driven steps for clicking and asserting multiple elements, and for waiting for multiple log markers in sequence. This reduces redundant waits, improves test reliability, and streamlines Gherkin syntax for multi-element actions and assertions. Also removed unnecessary manual wait steps where content or element checks now handle waiting automatically.
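A sketch of the sequential log-marker wait (step wording, timeout, and log directory are assumptions):

```typescript
import { DataTable, Then } from '@cucumber/cucumber';
import fs from 'fs-extra';
import path from 'path';
import type { ApplicationWorld } from './application';

// Hypothetical step: wait for each marker in order, scanning every .log file
// under the scenario's artifact directory.
Then('I wait for the following log markers in order:', { timeout: 20_000 }, async function(this: ApplicationWorld, dataTable: DataTable) {
  const logDirectory = path.resolve(process.cwd(), 'test-artifacts', this.scenarioSlug, 'userData-test', 'logs');
  for (const [marker] of dataTable.rows()) {
    const deadline = Date.now() + 15_000;
    let found = false;
    while (!found && Date.now() < deadline) {
      if (await fs.pathExists(logDirectory)) {
        const files = (await fs.readdir(logDirectory)).filter(name => name.endsWith('.log'));
        for (const file of files) {
          const content = await fs.readFile(path.join(logDirectory, file), 'utf8');
          if (content.includes(marker)) {
            found = true;
            break;
          }
        }
      }
      if (!found) await new Promise(resolve => setTimeout(resolve, 200));
    }
    if (!found) throw new Error(`Log marker "${marker}" not found within timeout`);
  }
});
```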

* Minor code cleanup and formatting improvements

Reordered imports in browserView.ts, fixed whitespace in cleanup.ts and useGitLogData.ts, and improved line formatting in GitLog/index.tsx for better readability and consistency.

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: tidgi <tiddlygit@gmail.com>
2026-01-20 11:11:28 +08:00


import { After, DataTable, Given, Then, When } from '@cucumber/cucumber';
import { AIGlobalSettings, AIProviderConfig } from '@services/externalAPI/interface';
import type { IWorkspace } from '@services/workspaces/interface';
import { backOff } from 'exponential-backoff';
import fs from 'fs-extra';
import { isEqual, omit } from 'lodash';
import path from 'path';
import type { ISettingFile } from '../../src/services/database/interface';
import { MockOpenAIServer } from '../supports/mockOpenAI';
import { getSettingsPath } from '../supports/paths';
import type { ApplicationWorld } from './application';

// Backoff configuration for retries
const BACKOFF_OPTIONS = {
  numOfAttempts: 10,
  startingDelay: 200,
  timeMultiple: 1.5,
};

/**
 * Generate deterministic embedding vector based on a semantic tag
 * This allows us to control similarity in tests without writing full 384-dim vectors
 *
 * Strategy:
 * - Similar tags (note1, note1-similar) -> similar vectors (high similarity)
 * - Different tags (note1, note2) -> different vectors (medium similarity)
 * - Unrelated tags (note1, unrelated) -> very different vectors (low similarity)
 */
function generateSemanticEmbedding(tag: string): number[] {
  const vector: number[] = [];
  // Parse tag to determine semantic relationship
  // Format: "note1", "note2", "query-note1", "unrelated"
  const baseTag = tag.replace(/-similar$/, '').replace(/^query-/, '');
  const isSimilar = tag.includes('-similar');
  const isQuery = tag.startsWith('query-');
  const isUnrelated = tag === 'unrelated';
  // Generate base vector from tag
  const seed = Array.from(baseTag).reduce((hash, char) => {
    return ((hash << 5) - hash) + char.charCodeAt(0);
  }, 0);
  for (let dimension = 0; dimension < 384; dimension++) {
    const x = Math.sin((seed + dimension) * 0.1) * 10000;
    let value = x - Math.floor(x);
    // Adjust vector based on semantic relationship
    if (isUnrelated) {
      // Completely different direction
      value = -value;
    } else if (isSimilar || isQuery) {
      // Very similar (>95% similarity) - add small noise
      value = value + (Math.sin(dimension * 0.01) * 0.05);
    }
    // Normalize to [-1, 1]
    vector.push(value * 2 - 1);
  }
  return vector;
}

// Helper function to start mock OpenAI server and update settings
async function startMockOpenAIServerAndUpdateSettings(
  world: ApplicationWorld,
  rules: Array<{ response: string; stream?: boolean; embedding?: number[] }>,
): Promise<void> {
  // Use dynamic port (0) to allow parallel test execution
  world.mockOpenAIServer = new MockOpenAIServer(0, rules);
  world.providerConfig = createProviderConfig();
  await world.mockOpenAIServer.start();
  // Update provider config with actual mock server URL
  world.providerConfig.baseURL = `${world.mockOpenAIServer.baseUrl}/v1`;
  // Update AI settings in settings.json with the correct baseURL
  const settingsPath = getSettingsPath(world);
  if (fs.existsSync(settingsPath)) {
    const settings = fs.readJsonSync(settingsPath) as ISettingFile;
    if (settings.aiSettings?.providers?.[0]) {
      settings.aiSettings.providers[0].baseURL = world.providerConfig.baseURL;
      fs.writeJsonSync(settingsPath, settings, { spaces: 2 });
    }
  }
}

// Agent-specific Given steps

/**
 * Start mock OpenAI server without any rules.
 * Rules can be added later using "I add mock OpenAI responses" step.
 */
Given('I have started the mock OpenAI server without rules', function(this: ApplicationWorld, done: (error?: Error) => void) {
  startMockOpenAIServerAndUpdateSettings(this, [])
    .then(() => {
      done();
    })
    .catch((error: unknown) => {
      done(error as Error);
    });
});

/**
 * Start mock OpenAI server with predefined rules from dataTable.
 * This is the legacy method used when rules are known upfront.
 */
Given('I have started the mock OpenAI server', function(this: ApplicationWorld, dataTable: DataTable | undefined, done: (error?: Error) => void) {
  try {
    const rules: Array<{ response: string; stream?: boolean; embedding?: number[] }> = [];
    if (dataTable && typeof dataTable.raw === 'function') {
      const rows = dataTable.raw();
      // Skip header row
      for (let index = 1; index < rows.length; index++) {
        const row = rows[index];
        const response = (row[0] ?? '').trim();
        const stream = (row[1] ?? '').trim().toLowerCase() === 'true';
        const embeddingTag = (row[2] ?? '').trim();
        // Generate embedding from semantic tag if provided
        let embedding: number[] | undefined;
        if (embeddingTag) {
          embedding = generateSemanticEmbedding(embeddingTag);
        }
        if (response) rules.push({ response, stream, embedding });
      }
    }
    startMockOpenAIServerAndUpdateSettings(this, rules)
      .then(() => {
        done();
      })
      .catch((error: unknown) => {
        done(error as Error);
      });
  } catch (error) {
    done(error as Error);
  }
});

/**
 * Add new responses to an already-running mock OpenAI server.
 * This allows scenarios to configure server responses after the application has started.
 */
Given('I add mock OpenAI responses:', function(this: ApplicationWorld, dataTable: DataTable | undefined) {
  if (!this.mockOpenAIServer) {
    throw new Error('Mock OpenAI server is not running. Use "I have started the mock OpenAI server" first.');
  }
  const rules: Array<{ response: string; stream?: boolean; embedding?: number[] }> = [];
  if (dataTable && typeof dataTable.raw === 'function') {
    const rows = dataTable.raw();
    // Skip header row
    for (let index = 1; index < rows.length; index++) {
      const row = rows[index];
      const response = (row[0] ?? '').trim();
      const stream = (row[1] ?? '').trim().toLowerCase() === 'true';
      const embeddingTag = (row[2] ?? '').trim();
      // Generate embedding from semantic tag if provided
      let embedding: number[] | undefined;
      if (embeddingTag) {
        embedding = generateSemanticEmbedding(embeddingTag);
      }
      if (response) rules.push({ response, stream, embedding });
    }
  }
  this.mockOpenAIServer.addRules(rules);
});

// Mock OpenAI server cleanup - for scenarios using mock OpenAI
After({ tags: '@mockOpenAI' }, async function(this: ApplicationWorld) {
  // Stop mock OpenAI server with timeout protection
  if (this.mockOpenAIServer) {
    try {
      await Promise.race([
        this.mockOpenAIServer.stop(),
        new Promise<void>((resolve) => setTimeout(resolve, 2000)),
      ]);
    } catch {
      // Ignore errors during cleanup
    } finally {
      this.mockOpenAIServer = undefined;
    }
  }
});

// Only keep agent-specific steps that can't use generic ones
Then('I should see {int} messages in chat history', async function(this: ApplicationWorld, expectedCount: number) {
  const currentWindow = this.currentWindow || this.mainWindow;
  if (!currentWindow) {
    throw new Error('No current window is available');
  }
  const messageSelector = '[data-testid="message-bubble"]';
  await backOff(
    async () => {
      // Wait for at least one message to exist
      await currentWindow.waitForSelector(messageSelector, { timeout: 5000 });
      // Count current messages
      const messages = currentWindow.locator(messageSelector);
      const currentCount = await messages.count();
      if (currentCount === expectedCount) {
        return; // Success
      } else if (currentCount > expectedCount) {
        throw new Error(`Expected ${expectedCount} messages but found ${currentCount} (too many)`);
      } else {
        // Not enough messages yet, throw to trigger retry
        throw new Error(`Expected ${expectedCount} messages but found ${currentCount}`);
      }
    },
    BACKOFF_OPTIONS,
  ).catch(async (error: unknown) => {
    // Get the final count for the error message; counting may itself fail,
    // so keep it optional rather than letting it mask the real error
    let finalCount: number | undefined;
    try {
      finalCount = await currentWindow.locator(messageSelector).count();
    } catch {
      // Ignore errors from the count itself
    }
    const foundInfo = finalCount === undefined ? '' : ` Found ${finalCount}.`;
    throw new Error(`Could not find expected ${expectedCount} messages.${foundInfo} Error: ${(error as Error).message}`);
  });
});

Then('the last AI request should contain system prompt {string}', async function(this: ApplicationWorld, expectedPrompt: string) {
  if (!this.mockOpenAIServer) {
    throw new Error('Mock OpenAI server is not running');
  }
  const lastRequest = this.mockOpenAIServer.getLastRequest();
  if (!lastRequest) {
    throw new Error('No AI request has been made yet');
  }
  // Find system message in the request
  const systemMessage = lastRequest.messages.find(message => message.role === 'system');
  if (!systemMessage) {
    throw new Error('No system message found in the AI request');
  }
  if (!systemMessage.content || !systemMessage.content.includes(expectedPrompt)) {
    throw new Error(`Expected system prompt to contain "${expectedPrompt}", but got: "${systemMessage.content}"`);
  }
});

Then('the last AI request should have {int} messages', async function(this: ApplicationWorld, expectedCount: number) {
  if (!this.mockOpenAIServer) {
    throw new Error('Mock OpenAI server is not running');
  }
  const lastRequest = this.mockOpenAIServer.getLastRequest();
  if (!lastRequest) {
    throw new Error('No AI request has been made yet');
  }
  const actualCount = lastRequest.messages.length;
  if (actualCount !== expectedCount) {
    throw new Error(`Expected ${expectedCount} messages in the AI request, but got ${actualCount}`);
  }
});

// Factory function to create scenario-specific provider config
// Returns a new object each time to avoid state pollution between scenarios
function createProviderConfig(): AIProviderConfig {
  return {
    provider: 'TestProvider',
    baseURL: 'http://127.0.0.1:0/v1', // Will be updated with actual port when mock server starts
    models: [
      { name: 'test-model', features: ['language'] },
      { name: 'test-embedding-model', features: ['language', 'embedding'] },
      { name: 'test-speech-model', features: ['speech'] },
    ],
    providerClass: 'openAICompatible',
    isPreset: false,
    enabled: true,
  };
}

const desiredModelParameters = { temperature: 0.7, systemPrompt: 'You are a helpful assistant.', topP: 0.95 };

// Step to remove AI settings for testing config errors
Given('I remove test ai settings', function(this: ApplicationWorld) {
  const settingsPath = path.resolve(process.cwd(), 'test-artifacts', this.scenarioSlug, 'userData-test', 'settings', 'settings.json');
  if (fs.existsSync(settingsPath)) {
    const existing = fs.readJsonSync(settingsPath) as ISettingFile;
    // Remove aiSettings but keep other settings
    const { aiSettings: _removed, ...rest } = existing;
    fs.writeJsonSync(settingsPath, rest, { spaces: 2 });
  }
});

Given('I ensure test ai settings exists', function(this: ApplicationWorld) {
  const settingsPath = path.resolve(process.cwd(), 'test-artifacts', this.scenarioSlug, 'userData-test', 'settings', 'settings.json');
  const parsed = fs.readJsonSync(settingsPath) as Record<string, unknown>;
  const actual = (parsed.aiSettings as Record<string, unknown> | undefined) || null;
  if (!actual) {
    throw new Error('aiSettings not found in settings file');
  }
  const actualProviders = (actual.providers as Array<Record<string, unknown>>) || [];
  // If providerConfig is set (from mock server), use it; otherwise create expected config
  // and use actual baseURL from settings (for UI-configured scenarios)
  let providerConfig: AIProviderConfig;
  const providerName = 'TestProvider';
  const existingProvider = actualProviders.find(p => p.provider === providerName) as AIProviderConfig | undefined;
  if (this.providerConfig) {
    // Use the mock server's providerConfig
    providerConfig = this.providerConfig;
  } else if (existingProvider) {
    // For UI-configured scenarios: build expected config using actual baseURL
    providerConfig = createProviderConfig();
    providerConfig.baseURL = existingProvider.baseURL ?? providerConfig.baseURL;
  } else {
    providerConfig = createProviderConfig();
  }
  // Build expected aiSettings from providerConfig and compare with actual
  const modelsArray = providerConfig.models;
  const modelName = modelsArray[0]?.name;
  // Check TestProvider exists
  const testProvider = actualProviders.find(p => p.provider === providerName);
  if (!testProvider) {
    console.error('TestProvider not found in actual providers:', JSON.stringify(actualProviders, null, 2));
    throw new Error('TestProvider not found in aiSettings');
  }
  // Verify TestProvider configuration
  if (!isEqual(testProvider, providerConfig)) {
    console.error('TestProvider config mismatch. expected:', JSON.stringify(providerConfig, null, 2));
    console.error('TestProvider config actual:', JSON.stringify(testProvider, null, 2));
    throw new Error('TestProvider configuration does not match expected');
  }
  // Check ComfyUI provider exists
  const comfyuiProvider = actualProviders.find(p => p.provider === 'comfyui');
  if (!comfyuiProvider) {
    console.error('ComfyUI provider not found in actual providers:', JSON.stringify(actualProviders, null, 2));
    throw new Error('ComfyUI provider not found in aiSettings');
  }
  // Verify ComfyUI has test-flux model with workflow path
  const comfyuiModels = (comfyuiProvider.models as Array<Record<string, unknown>>) || [];
  const testFluxModel = comfyuiModels.find(m => m.name === 'test-flux');
  if (!testFluxModel) {
    console.error('test-flux model not found in ComfyUI models:', JSON.stringify(comfyuiModels, null, 2));
    throw new Error('test-flux model not found in ComfyUI provider');
  }
  // Verify workflow path
  const parameters = testFluxModel.parameters as Record<string, unknown> | undefined;
  if (!parameters || parameters.workflowPath !== 'C:/test/mock/workflow.json') {
    console.error('Workflow path mismatch. expected: C:/test/mock/workflow.json, actual:', parameters?.workflowPath);
    throw new Error('Workflow path not correctly saved');
  }
  // Verify default config
  const defaultConfig = actual.defaultConfig as Record<string, unknown>;
  const defaultModel = defaultConfig.default as Record<string, unknown>;
  if (defaultModel?.provider !== providerName || defaultModel?.model !== modelName) {
    console.error('Default config mismatch. expected provider:', providerName, 'model:', modelName);
    console.error('actual defaultModel:', JSON.stringify(defaultModel, null, 2));
    throw new Error('Default configuration does not match expected');
  }
});

// Version without datatable for simple cases
Given('I add test ai settings', async function(this: ApplicationWorld) {
  const settingsPath = path.resolve(process.cwd(), 'test-artifacts', this.scenarioSlug, 'userData-test', 'settings', 'settings.json');
  let existing = {} as ISettingFile;
  if (fs.existsSync(settingsPath)) {
    existing = fs.readJsonSync(settingsPath) as ISettingFile;
  } else {
    fs.ensureDirSync(path.dirname(settingsPath));
  }
  // Initialize scenario-specific providerConfig if not set
  if (!this.providerConfig) {
    this.providerConfig = createProviderConfig();
  }
  const providerConfig = this.providerConfig;
  const modelsArray = providerConfig.models;
  const modelName = modelsArray[0]?.name;
  const embeddingModelName = modelsArray[1]?.name;
  const speechModelName = modelsArray[2]?.name;
  const newAi: AIGlobalSettings = {
    providers: [providerConfig],
    defaultConfig: {
      default: {
        provider: providerConfig.provider,
        model: modelName,
      },
      embedding: {
        provider: providerConfig.provider,
        model: embeddingModelName,
      },
      speech: {
        provider: providerConfig.provider,
        model: speechModelName,
      },
      modelParameters: desiredModelParameters,
    },
  };
  const newPreferences = existing.preferences || {};
  fs.writeJsonSync(settingsPath, { ...existing, aiSettings: newAi, preferences: newPreferences } as ISettingFile, { spaces: 2 });
});

// Version with datatable for advanced configuration
Given('I add test ai settings:', async function(this: ApplicationWorld, dataTable: DataTable) {
  const settingsPath = path.resolve(process.cwd(), 'test-artifacts', this.scenarioSlug, 'userData-test', 'settings', 'settings.json');
  let existing = {} as ISettingFile;
  if (fs.existsSync(settingsPath)) {
    existing = fs.readJsonSync(settingsPath) as ISettingFile;
  } else {
    fs.ensureDirSync(path.dirname(settingsPath));
  }
  // Initialize scenario-specific providerConfig if not set
  if (!this.providerConfig) {
    this.providerConfig = createProviderConfig();
  }
  const providerConfig = this.providerConfig;
  const modelsArray = providerConfig.models;
  const modelName = modelsArray[0]?.name;
  const embeddingModelName = modelsArray[1]?.name;
  const speechModelName = modelsArray[2]?.name;
  // Parse options from data table
  let freeModel: string | undefined;
  let aiGenerateBackupTitle: boolean | undefined;
  let aiGenerateBackupTitleTimeout: number | undefined;
  if (dataTable && typeof dataTable.raw === 'function') {
    const rows = dataTable.raw();
    // Process all rows as key-value pairs (no header row)
    for (let index = 0; index < rows.length; index++) {
      const row = rows[index];
      const key = (row[0] ?? '').trim();
      const value = (row[1] ?? '').trim();
      if (key === 'freeModel') {
        // If value is 'true', enable freeModel using the same model as main model
        if (value === 'true') {
          freeModel = modelName;
        }
      } else if (key === 'aiGenerateBackupTitle') {
        aiGenerateBackupTitle = value === 'true';
      } else if (key === 'aiGenerateBackupTitleTimeout') {
        aiGenerateBackupTitleTimeout = Number.parseInt(value, 10);
      }
    }
  }
  const newAi: AIGlobalSettings = {
    providers: [providerConfig],
    defaultConfig: {
      default: {
        provider: providerConfig.provider,
        model: modelName,
      },
      embedding: {
        provider: providerConfig.provider,
        model: embeddingModelName,
      },
      speech: {
        provider: providerConfig.provider,
        model: speechModelName,
      },
      ...(freeModel
        ? {
          free: {
            provider: providerConfig.provider,
            model: freeModel,
          },
        }
        : {}),
      modelParameters: desiredModelParameters,
    },
  };
  const newPreferences = {
    ...(existing.preferences || {}),
    ...(aiGenerateBackupTitle !== undefined ? { aiGenerateBackupTitle } : {}),
    ...(aiGenerateBackupTitleTimeout !== undefined ? { aiGenerateBackupTitleTimeout } : {}),
  };
  fs.writeJsonSync(settingsPath, { ...existing, aiSettings: newAi, preferences: newPreferences } as ISettingFile, { spaces: 2 });
});

async function clearAISettings(scenarioRoot?: string) {
  const root = scenarioRoot || process.cwd();
  const settingsPath = path.resolve(root, 'userData-test', 'settings', 'settings.json');
  if (!(await fs.pathExists(settingsPath))) return;
  const parsed = await fs.readJson(settingsPath) as ISettingFile;
  const cleaned = omit(parsed, ['aiSettings']);
  await fs.writeJson(settingsPath, cleaned, { spaces: 2 });
}

// Step to send ask AI with selection IPC message
When('I send ask AI with selection message with text {string} and workspace {string}', async function(this: ApplicationWorld, selectionText: string, workspaceName: string) {
  const currentWindow = await this.getWindow('main');
  if (!currentWindow) {
    throw new Error('Main window not found');
  }
  // Get workspace ID from workspace name
  const workspaceId = await currentWindow.evaluate(async (name: string): Promise<string | undefined> => {
    // Use a narrow type view of window.service to avoid coupling to preload internals.
    const windowWithService = window as unknown as { service: { workspace: { getWorkspacesAsList: () => Promise<IWorkspace[]> } } };
    const workspaces = await windowWithService.service.workspace.getWorkspacesAsList();
    const workspace = workspaces.find((ws) => ws.name === name);
    return workspace?.id;
  }, workspaceName);
  if (!workspaceId) {
    throw new Error(`Workspace with name "${workspaceName}" not found`);
  }
  // Send IPC message to trigger "Talk with AI" through main process
  // Use app.evaluate to access Electron main process API
  if (!this.app) {
    throw new Error('Electron app not found');
  }
  const sendResult = await this.app.evaluate(async ({ BrowserWindow }, { text, wsId }: { text: string; wsId: string }) => {
    // Find main window - the first window is always the main window in TidGi
    const allWindows = BrowserWindow.getAllWindows();
    const mainWindow = allWindows[0]; // Main window is always the first window created
    if (!mainWindow) {
      return { success: false, error: 'No windows found', windowCount: allWindows.length };
    }
    const data = {
      selectionText: text,
      wikiUrl: `tidgi://${wsId}`,
      workspaceId: wsId,
    };
    // Send IPC message to renderer
    mainWindow.webContents.send('ask-ai-with-selection', data);
    return { success: true };
  }, { text: selectionText, wsId: workspaceId });
  if (!sendResult.success) {
    throw new Error(`Failed to send IPC message: ${sendResult.error || 'Unknown error'}`);
  }
  // Small delay to ensure IPC message is processed (cross-process communication needs time)
  await new Promise(resolve => setTimeout(resolve, 200));
});

export { clearAISettings };