TidGi-Desktop/features/stepDefinitions/agent.ts
lin onetwo 4c5e1d16c7
Fix/misc bug (#691)
* Use git service for backups and dynamic AI menus

Switch backup actions to call gitService.commitAndSync(commitOnly) so local backups work without remote auth; AI commit generation is triggered by omitting commitMessage.

- Make AI-related menu items always registered but use dynamic visibility/enabled checks (isAIEnabled) so they appear/disappear at runtime; update menu item types/imports accordingly.
- Optimize workspace persistence to write only the single updated workspace to settings.json (stripping syncable fields when tidgi.config exists) instead of saving all workspaces; remove the old saveWorkspacesToSettings method.
- Add warnings/logging: warn if the git worker observable is undefined; log/notify when cloud sync is skipped due to missing auth/gitUrl.
- Misc: remove a redundant debug log in tidgiConfig, remove the native process monitoring startup call, include commitMessage for CommitDetailsPanel sync, and drop entriesFingerprint/debug noise from git log data.
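The backup-call shape described above can be sketched as follows. This is a hedged illustration of the options contract, not TidGi's actual service interface: the option names are taken from the commit text, but describeBackup and its output strings are invented for the example.

```typescript
// Hypothetical options shape inferred from the commit message above.
interface CommitAndSyncOptions {
  commitOnly?: boolean; // true: local commit only, no remote auth needed
  commitMessage?: string; // omitted: the AI generates the commit message
}

// Illustrative helper (not from the source) showing how the two flags combine.
function describeBackup(options: CommitAndSyncOptions): string {
  const mode = options.commitOnly ? 'local commit only' : 'commit and sync';
  const message = options.commitMessage ?? '(AI-generated)';
  return `${mode}: ${message}`;
}
```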

* fix: avoid rewriting unchanged workspace config

* fix: separate plain and ai backup flows

* feat: add searchable settings views

* fix: narrow sync adaptor revision type

* chore: release tidgi-shared 0.1.3

* Preferences: unify Scheduled/Background tasks and fix skeleton nav

- Remove legacy background task UI/dialogs (use ScheduledTask unified system)
- Attach invisible anchors to skeleton placeholders so sidebar scrollIntoView works while loading
- Add English/zh translation keys for AddAlarm/AddHeartbeat
- Add requestIdleCallback polyfill (tests) and speed up to 0ms for tests
- Search: match English translations (txEn) in SearchResultsView

* Fix view zero-size on minimize; realign on restore/show

Guard view bounds from 0x0 content size to prevent BrowserView disappearing when window is minimized; fall back to safe offscreen size. Add window 'restore' and 'show' handlers to realign views. Add getMemoryUsage() in wiki worker and expose RSS/heap via getWorkersInfo; show worker memory in Developer Tools diagnostics.
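The 0x0 guard above can be modeled as a pure function. This is a minimal sketch under assumptions: the function name, signature, and fallback value are illustrative, not the real ViewService code.

```typescript
interface Bounds { x: number; y: number; width: number; height: number }

// When a window is minimized, its content size can report 0x0; applying that
// to the view would make the BrowserView disappear. Guard by substituting a
// safe fallback size until the window is restored.
function guardViewBounds(contentSize: [number, number], desired: Bounds, fallback: Bounds): Bounds {
  const [width, height] = contentSize;
  return (width === 0 || height === 0) ? fallback : desired;
}
```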

* Fix background/quit behavior: synchronous close handler; platform-specific runOnBackground default; add getWindowMetaSync; add forced-exit timeout for before-quit cleanup

* Refactor preferences: add definitions and registry

Add a new structured preferences system: introduce definition schemas, typed item/section types, explicit section files, a registry (allSections/sectionById), side effects, action handlers, and helper builders (zodPreferencesSchema). Add custom preference UI items and registration (customItems, registerCustomSections) and tests validating schemas. Replace previous zod/settings schema files with the new definitions and make IPreferences an explicit TypeScript interface. Small UI updates: use LanguageSelectorItem and WikiUserNameItem in Guide and Help pages. Also remove net.isOnline() pre-checks from Git.commitAndSync and Git.forcePull to avoid false-negative network detections.
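The registry shape described above (allSections/sectionById) might look like the following. All names here are assumptions chosen for the example; the real definition schemas in TidGi are richer (typed items, side effects, action handlers, zod validation).

```typescript
// Illustrative definition/registry shapes; not TidGi's actual schema.
interface PreferenceItemDefinition {
  key: string;
  defaultValue: unknown;
}
interface PreferenceSection {
  id: string;
  items: PreferenceItemDefinition[];
}

// One explicit section file per area, collected into a registry.
const generalSection: PreferenceSection = {
  id: 'general',
  items: [{ key: 'language', defaultValue: 'en' }],
};

const allSections: PreferenceSection[] = [generalSection];
const sectionById = new Map(allSections.map(section => [section.id, section]));
```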

* Fix blank wiki after hide+reopen: re-attach orphaned views on realign

Root cause: three compounding issues when the main window is hidden and then re-shown via a second-instance shortcut click.

1. getView() now auto-removes stale entries whose webContents.isDestroyed() == true.
   This allows addView() / showWorkspaceView() to recreate destroyed views instead of
   silently skipping them (which left the new window blank).

2. realignView() now calls browserWindow.contentView.addChildView(view) before setBounds().
   If a view survived window destruction but became orphaned (detached from its parent
   BrowserWindow), re-attaching it makes it visible again.  addChildView is idempotent
   so normal re-entrant calls are safe.

3. window.open() existing-window branch now calls addViewForAllBrowserViews(activeWorkspace)
   before realignActiveWorkspace(). This ensures that any destroyed/missing view is
   recreated BEFORE the realign attempts to reposition it.  The call is a no-op when
   views already exist and are healthy.
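Point 2 above can be modeled in miniature: re-attach before setBounds, relying on the fact that re-adding an already-attached child is a no-op. FakeContentView is a stand-in for Electron's BrowserWindow.contentView; the Set-based child list mimics the idempotent addChildView behavior the fix depends on.

```typescript
interface FakeView { bounds: { x: number; y: number; width: number; height: number } | undefined }

class FakeContentView {
  readonly children = new Set<FakeView>();
  addChildView(view: FakeView): void {
    this.children.add(view); // re-adding an attached view is a no-op
  }
}

function realignView(window: { contentView: FakeContentView }, view: FakeView): void {
  // Re-attach first: restores an orphaned view, harmless if already attached.
  window.contentView.addChildView(view);
  view.bounds = { x: 0, y: 0, width: 800, height: 600 };
}
```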

* Fix blank WebContentsView after restoring hidden window (Windows bug)

Electron on Windows sometimes fails to repaint a WebContentsView that remains attached to a window that is hidden and then shown again. By unconditionally calling removeChildView followed by addChildView during realignView and showView, we force the Chromium compositor to reparent and paint the view correctly, ensuring the Wiki becomes visible as soon as the user restores the app from the background.

* Fix blank WebView on window restore: show view before realign, fix same-workspace click

Two root causes identified and fixed:

1. openWorkspaceTiddler silently skipped setActiveWorkspaceView when the user
   clicked the already-active workspace icon (guard: oldId !== newId).  The view
   was blank and clicking its icon did nothing.  Guard removed: setActiveWorkspaceView
   is now always called; it is safe with the same ID (the hibernation guard is already
   correct for that case).

2. The 'show' event handler and window.open() existing-window path were calling
   realignActiveWorkspace() which only calls setBounds.  On Windows, when a window
   transitions from hidden/background to visible the Chromium compositor may not
   repaint a WebContentsView whose bounds have not changed.  Both paths now call
   refreshActiveWorkspaceView(), a new lightweight helper that calls showView()
   (removeChildView + addChildView + setBounds + webContents.focus) before realigning.
   This forces a compositor repaint and makes the wiki page visible immediately.

* Refactor view restore chain: clean up redundant repaint/realign calls

Full chain analysis identified 4 structural problems:

1. refreshActiveWorkspaceView() called TWICE concurrently on window restore:
   window.open() called existedWindow.show(), which fired the 'show' event
   -> refreshActiveWorkspaceView() AND THEN immediately called refreshActiveWorkspaceView()
   again explicitly, creating a race condition on removeChildView+addChildView.

2. realignView() contained removeChildView+addChildView, which ran immediately after
   showView() already did removeChildView+addChildView+setBounds+focus.  The second
   remove+add clobbered the focus state set by showView, breaking keyboard focus.

3. setActiveWorkspaceView() called showWorkspaceView + realignActiveWorkspace, meaning
   the view was remove+add+setBounds+focused by showView, then immediately remove+add+
   setBounds-without-focus again by realignView.  Double bounds, lost focus.

4. Same pattern in refreshActiveWorkspaceView: showWorkspaceView + realignActiveWorkspace.

Clean design after refactor:
- showView()        = force-repaint path: remove+add+setBounds+focus (unchanged)
- realignView()     = bounds-only:        setBounds ONLY, no remove+add
- showWorkspaceView = calls showView for main+mini windows
- realignActiveWorkspace = calls realignView (now just setBounds) + buildMenu;
                     used for fullscreen/sidebar/resize events
- setActiveWorkspaceView = showWorkspaceView + buildMenu (not +realignActiveWorkspace)
- refreshActiveWorkspaceView = showWorkspaceView + buildMenu (not +realignActiveWorkspace);
                     called from 'show' window event (fire-and-forget: no rethrow)
- window.open() existing-window = show() only; 'show' event handler calls
                     refreshActiveWorkspaceView automatically, no duplicate call
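The call discipline above can be made explicit by recording call names. The function bodies are stand-ins for the Electron calls described in the design list; the point is that only the force-repaint path touches remove+add and focus.

```typescript
// showView() = force-repaint path: reparent, position, restore keyboard focus.
function showView(calls: string[]): void {
  calls.push('removeChildView', 'addChildView', 'setBounds', 'focus');
}

// realignView() = bounds-only path: no remove+add, so it can never clobber focus.
function realignView(calls: string[]): void {
  calls.push('setBounds');
}
```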

* chore: bump electron-ipc-cat to 2.4.0

Rolling Observable timeout (120s initial, 60s idle) fixes git-upload-pack
timeout for large repos (100+ MB) during mobile sync.

* style: unify layout between Preferences and EditWorkspace

Use PageRoot and PageInner from PreferenceComponents to eliminate subtle padding/background differences. Resize EditWorkspace window to match Preferences. Clean up lint errors.

* Add E2E test for window-restore blank-view bug + log markers

Two changes:

1. Log markers added to aid diagnosis and enable E2E verification:
   - [test-id-VIEW_SHOWN]             in ViewService.showView()
   - [test-id-REFRESH_ACTIVE_VIEW_START/DONE] in WorkspaceView.refreshActiveWorkspaceView()

2. New E2E feature: features/windowRestore.feature
   Scenario 1: 'Wiki WebContentsView is visible immediately after restoring hidden window'
     - hides main window (same path as close+runOnBackground)
     - triggers second-instance via app.emit('second-instance')
     - asserts [test-id-REFRESH_ACTIVE_VIEW_DONE] and [test-id-VIEW_SHOWN] log markers
     - asserts browser view is within visible window bounds
     - asserts wiki content is readable
   Scenario 2: 'Clicking already-active workspace icon re-shows the WebContentsView'
     - verifies the removed oldId !== newId guard: clicking current workspace must
       now call setActiveWorkspaceView which fires showView

   Two step definitions added to features/stepDefinitions/application.ts:
   - 'I hide the main window as if closing with runOnBackground'
     calls BrowserWindow.hide() directly in main process
   - 'I reopen the main window as second instance would'
     emits app 'second-instance' event in main process

* Fix E2E test: correct second-instance emit args, add wiki-ready wait in Background

Three issues found and fixed by running the tests:

1. app.emit('second-instance') argument order wrong
   DeepLinkService listener: (_event, commandLine) => commandLine.pop()
   Our emit: app.emit('second-instance', [], process.cwd(), {})
   This made process.cwd() land in commandLine, so .pop() failed on a string.
   Fix: app.emit('second-instance', {}, [], '', {}): a fake Event first,
   then an empty argv array, then the workingDirectory.

2. In test mode, window.open() skips existedWindow.show() to avoid UI popups.
   The 'show' event never fired so refreshActiveWorkspaceView was never called
   and the window stayed hidden from Playwright's perspective.
   Fix: explicitly call mainWindow.show() via app.evaluate() after emitting
   second-instance, replicating what production window.open() does.

3. Background used 'the browser view should be loaded and visible' which has
   a 21-second timeout and fails before TiddlyWiki finishes initializing in
   the test environment (pre-existing issue in defaultWiki.feature too).
   Fix: replaced with deterministic log marker waits:
     [test-id-WIKI_WORKER_STARTED] + [test-id-VIEW_LOADED]
   plus 'I confirm the main window browser view is positioned within visible
   window bounds' for a structural check without content dependency.
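Issue 1 above can be reproduced with a plain EventEmitter: the listener destructures (event, commandLine, workingDirectory), so a fake Event object must come first or the argv array shifts into the wrong position. The 'tidgi://open' argv entry is an illustrative value, not taken from the source.

```typescript
import { EventEmitter } from 'node:events';

const app = new EventEmitter();
let lastArgument: string | undefined;

// DeepLinkService-style listener: pops the deep link off the argv array.
app.on('second-instance', (_event: unknown, commandLine: string[], _workingDirectory: string) => {
  lastArgument = commandLine.pop();
});

// Fake Event first, then the argv array, then workingDirectory, then additionalData.
app.emit('second-instance', {}, ['tidgi://open'], process.cwd(), {});
```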

Result: both @window-restore scenarios pass (31/31 steps green, ~48s).

* Fix reopened main window restore after recreation and rebind view resize

Root cause on Windows was not the hide/show path, but the close+recreate path when tidgi mini window keeps the app alive while runOnBackground is false.

What was actually happening:
1. The user closed the main window.
2. The app stayed alive because tidgi mini window still existed.
3. A second-instance launch recreated a new main BrowserWindow.
4. The old workspace WebContentsView still existed in ViewService.
5. But the new main window missed the automatic restore because the BrowserWindow 'show' event fired inside handleCreateBasicWindow() before registerBrowserViewWindowListeners() attached the 'show' listener.
6. If the user then clicked the workspace icon, showView() reattached the old view manually, but its resize listener was still bound to the old destroyed BrowserWindow, so resizing the new window no longer resized the view.

Fix:
- ViewService now rebinds the debounced resize handler every time showView() attaches an existing view to a BrowserWindow.
- Window.open() now detects the recreate-main-window case for BrowserView windows and immediately calls refreshActiveWorkspaceView() if the active workspace already has an existing view instance.
  This restores the view without waiting for a workspace icon click.

Why old E2E missed it:
- It simulated hide/show (runOnBackground=true) instead of the real user path (main window close + app kept alive by tidgi mini window).
- It only checked that the view was within visible bounds; it did not resize the window and assert the view filled the content area after the reopen.

New E2E coverage:
- Configures tidgiMiniWindow=true and runOnBackground=false before launch.
- Closes the main window, reopens it via second-instance, verifies refresh/view-shown markers, verifies bounds, resizes the recreated main window, and asserts the BrowserView fills the content area after the debounced resize handler runs.
- Scenario passes locally: 1 scenario, 20 steps, all green.

* Update pnpm-lock.yaml

* fix: address Copilot PR review issues

- Restore workspaceID from window.meta() in EditWorkspace (was hard-coded debug value)
- Add missing React/type imports to customComponentRegistry.ts, workspaceCustomComponentRegistry.ts, registerCustomSections.tsx, registerWorkspaceCustomSections.tsx, useSections.ts
- Fix HighlightText regex: use index parity (odd index = match) instead of stateful regex.test() with global flag
- Fix actionHandlers native.pickDirectory to read current preference value instead of passing the key string as a path
- Move PreferenceComponents import before registerCustomSections() call to fix import ordering
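The HighlightText fix above relies on a property of String.prototype.split: splitting on a regex with a capturing group keeps the captured matches in the result array at odd indices, so no stateful regex.test() with the global flag is needed. A minimal sketch (the function name and return shape are illustrative):

```typescript
function highlightParts(text: string, query: string): Array<{ text: string; match: boolean }> {
  // Escape regex metacharacters in the user's query.
  const escaped = query.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  // The capturing group makes split() interleave matches into the result.
  const parts = text.split(new RegExp(`(${escaped})`, 'gi'));
  // Odd index = captured match, even index = plain text between matches.
  return parts.map((part, index) => ({ text: part, match: index % 2 === 1 }));
}
```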

* fix: fix import ordering to satisfy dprint/eslint format rules

* fix: stabilize e2e selectors and EditWorkspace loading fallback

- align workspace section testids in e2e features
- migrate background-task e2e to scheduled-task selectors
- add edit workspace fallback loading when metadata/observable is late
- add deterministic switch testid for schema boolean items
- make sync snackbar assertion resilient to progress text changes
- clear draft-check timeout handle in sync service

* fix: add apiKey to test provider config so isAIAvailable() returns true

The AI commit message e2e test expects both commit-now-button and
commit-now-ai-button to appear. The AI button only renders when
isAIGenerateBackupTitleEnabled() returns true, which internally calls
externalAPIService.isAIAvailable(). That method requires a non-empty
apiKey for openAICompatible providers, but the test's
createProviderConfig() never set one, causing isAIAvailable() to
return false and the AI button to never render.

* feat(gitServer): add generateFullArchive for fast mobile clone

- Add generateFullArchive() to IGitServerService interface
- Implement tar archive generation: git archive + system tar append
- Archives working tree + minimal .git metadata (HEAD, refs, objects)
- Cache by HEAD commit hash, auto-cleanup old archives
- Bump tidgi-shared to 0.1.5

* fix(e2e): resolve workspace by runtime name/folder in step defs

* fix(ci): satisfy lint rules in gitServer archive generation
2026-04-01 15:45:26 +08:00

652 lines
25 KiB
TypeScript

import { After, DataTable, Given, Then, When } from '@cucumber/cucumber';
import { AIGlobalSettings, AIProviderConfig } from '@services/externalAPI/interface';
import type { IWorkspace } from '@services/workspaces/interface';
import { backOff } from 'exponential-backoff';
import fs from 'fs-extra';
import { isEqual, omit } from 'lodash';
import path from 'path';
import type { ISettingFile } from '../../src/services/database/interface';
import { MockOpenAIServer } from '../supports/mockOpenAI';
import { getSettingsPath } from '../supports/paths';
import { PLAYWRIGHT_SHORT_TIMEOUT } from '../supports/timeouts';
import type { ApplicationWorld } from './application';
// Backoff configuration for retries
const BACKOFF_OPTIONS = {
  numOfAttempts: 10,
  startingDelay: 200,
  timeMultiple: 1.5,
};

/**
 * Generate deterministic embedding vector based on a semantic tag
 * This allows us to control similarity in tests without writing full 384-dim vectors
 *
 * Strategy:
 * - Similar tags (note1, note1-similar) -> similar vectors (high similarity)
 * - Different tags (note1, note2) -> different vectors (medium similarity)
 * - Unrelated tags (note1, unrelated) -> very different vectors (low similarity)
 */
function generateSemanticEmbedding(tag: string): number[] {
  const vector: number[] = [];
  // Parse tag to determine semantic relationship
  // Format: "note1", "note2", "query-note1", "unrelated"
  const baseTag = tag.replace(/-similar$/, '').replace(/^query-/, '');
  const isSimilar = tag.includes('-similar');
  const isQuery = tag.startsWith('query-');
  const isUnrelated = tag === 'unrelated';
  // Generate base vector from tag
  const seed = Array.from(baseTag).reduce((hash, char) => {
    return ((hash << 5) - hash) + char.charCodeAt(0);
  }, 0);
  for (let dimension = 0; dimension < 384; dimension++) {
    const x = Math.sin((seed + dimension) * 0.1) * 10000;
    let value = x - Math.floor(x);
    // Adjust vector based on semantic relationship
    if (isUnrelated) {
      // Completely different direction
      value = -value;
    } else if (isSimilar || isQuery) {
      // Very similar (>95% similarity) - add small noise
      value = value + (Math.sin(dimension * 0.01) * 0.05);
    }
    // Normalize to [-1, 1]
    vector.push(value * 2 - 1);
  }
  return vector;
}
// Helper function to start mock OpenAI server and update settings
async function startMockOpenAIServerAndUpdateSettings(
  world: ApplicationWorld,
  rules: Array<{ response: string; stream?: boolean; embedding?: number[] }>,
): Promise<void> {
  // Use dynamic port (0) to allow parallel test execution
  world.mockOpenAIServer = new MockOpenAIServer(0, rules);
  world.providerConfig = createProviderConfig();
  await world.mockOpenAIServer.start();
  // Update provider config with actual mock server URL
  world.providerConfig.baseURL = `${world.mockOpenAIServer.baseUrl}/v1`;
  // Update AI settings in settings.json with the correct baseURL
  const settingsPath = getSettingsPath(world);
  if (fs.existsSync(settingsPath)) {
    const settings = fs.readJsonSync(settingsPath) as ISettingFile;
    if (settings.aiSettings?.providers?.[0]) {
      settings.aiSettings.providers[0].baseURL = world.providerConfig.baseURL;
      fs.writeJsonSync(settingsPath, settings, { spaces: 2 });
    }
  }
}

// Agent-specific Given steps

/**
 * Start mock OpenAI server without any rules.
 * Rules can be added later using "I add mock OpenAI responses" step.
 */
Given('I have started the mock OpenAI server without rules', function(this: ApplicationWorld, done: (error?: Error) => void) {
  startMockOpenAIServerAndUpdateSettings(this, [])
    .then(() => {
      done();
    })
    .catch((error: unknown) => {
      done(error as Error);
    });
});
/**
 * Start mock OpenAI server with predefined rules from dataTable.
 * This is the legacy method used when rules are known upfront.
 */
Given('I have started the mock OpenAI server', function(this: ApplicationWorld, dataTable: DataTable | undefined, done: (error?: Error) => void) {
  try {
    const rules: Array<{ response: string; stream?: boolean; embedding?: number[] }> = [];
    if (dataTable && typeof dataTable.raw === 'function') {
      const rows = dataTable.raw();
      // Skip header row
      for (let index = 1; index < rows.length; index++) {
        const row = rows[index];
        const response = (row[0] ?? '').trim();
        const stream = (row[1] ?? '').trim().toLowerCase() === 'true';
        const embeddingTag = (row[2] ?? '').trim();
        // Generate embedding from semantic tag if provided
        let embedding: number[] | undefined;
        if (embeddingTag) {
          embedding = generateSemanticEmbedding(embeddingTag);
        }
        // Include rules with a response OR an embedding — MockOpenAIServer separates them into chatRules vs embeddingRules internally
        if (response || embedding) rules.push({ response, stream, embedding });
      }
    }
    startMockOpenAIServerAndUpdateSettings(this, rules)
      .then(() => {
        done();
      })
      .catch((error: unknown) => {
        done(error as Error);
      });
  } catch (error) {
    done(error as Error);
  }
});

/**
 * Add new responses to an already-running mock OpenAI server.
 * This allows scenarios to configure server responses after the application has started.
 */
Given('I add mock OpenAI responses:', function(this: ApplicationWorld, dataTable: DataTable | undefined) {
  if (!this.mockOpenAIServer) {
    throw new Error('Mock OpenAI server is not running. Use "I have started the mock OpenAI server" first.');
  }
  const rules: Array<{ response: string; stream?: boolean; embedding?: number[] }> = [];
  if (dataTable && typeof dataTable.raw === 'function') {
    const rows = dataTable.raw();
    // Skip header row
    for (let index = 1; index < rows.length; index++) {
      const row = rows[index];
      const response = (row[0] ?? '').trim();
      const stream = (row[1] ?? '').trim().toLowerCase() === 'true';
      const embeddingTag = (row[2] ?? '').trim();
      // Generate embedding from semantic tag if provided
      let embedding: number[] | undefined;
      if (embeddingTag) {
        embedding = generateSemanticEmbedding(embeddingTag);
      }
      // Include rules with a response OR an embedding — MockOpenAIServer separates them into chatRules vs embeddingRules internally
      if (response || embedding) rules.push({ response, stream, embedding });
    }
  }
  this.mockOpenAIServer.addRules(rules);
});
// Mock OpenAI server cleanup - for scenarios using mock OpenAI
After({ tags: '@mockOpenAI' }, async function(this: ApplicationWorld) {
  // Stop mock OpenAI server with timeout protection
  if (this.mockOpenAIServer) {
    try {
      await Promise.race([
        this.mockOpenAIServer.stop(),
        new Promise<void>((resolve) => setTimeout(resolve, 2000)),
      ]);
    } catch {
      // Ignore errors during cleanup
    } finally {
      this.mockOpenAIServer = undefined;
    }
  }
});

// Only keep agent-specific steps that can't use generic ones
Then('I should see {int} messages in chat history', async function(this: ApplicationWorld, expectedCount: number) {
  const currentWindow = this.currentWindow || this.mainWindow;
  if (!currentWindow) {
    throw new Error('No current window is available');
  }
  const messageSelector = '[data-testid="message-bubble"]';
  await backOff(
    async () => {
      // Wait for at least one message to exist
      await currentWindow.waitForSelector(messageSelector, { timeout: PLAYWRIGHT_SHORT_TIMEOUT });
      // Count current messages
      const messages = currentWindow.locator(messageSelector);
      const currentCount = await messages.count();
      if (currentCount === expectedCount) {
        return; // Success
      } else if (currentCount > expectedCount) {
        throw new Error(`Expected ${expectedCount} messages but found ${currentCount} (too many)`);
      } else {
        // Not enough messages yet, throw to trigger retry
        throw new Error(`Expected ${expectedCount} messages but found ${currentCount}`);
      }
    },
    BACKOFF_OPTIONS,
  ).catch(async (error: unknown) => {
    // Get final count for the error message; counting may itself fail if the window is gone.
    // Note: the count must happen in its own try/catch, otherwise the detailed error
    // thrown here would be swallowed by the same catch that guards the count() call.
    let finalCount: number | undefined;
    try {
      finalCount = await currentWindow.locator(messageSelector).count();
    } catch {
      // Fall through to the generic message below
    }
    if (finalCount !== undefined) {
      throw new Error(`Could not find expected ${expectedCount} messages. Found ${finalCount}. Error: ${(error as Error).message}`);
    }
    throw new Error(`Could not find expected ${expectedCount} messages. Error: ${(error as Error).message}`);
  });
});
Then('the last AI request should contain system prompt {string}', async function(this: ApplicationWorld, expectedPrompt: string) {
  if (!this.mockOpenAIServer) {
    throw new Error('Mock OpenAI server is not running');
  }
  const lastRequest = this.mockOpenAIServer.getLastRequest();
  if (!lastRequest) {
    throw new Error('No AI request has been made yet');
  }
  // Find system message in the request
  const systemMessage = lastRequest.messages.find(message => message.role === 'system');
  if (!systemMessage) {
    throw new Error('No system message found in the AI request');
  }
  if (!systemMessage.content || !systemMessage.content.includes(expectedPrompt)) {
    throw new Error(`Expected system prompt to contain "${expectedPrompt}", but got: "${systemMessage.content}"`);
  }
});

Then('the last AI request system prompt should not contain {string}', async function(this: ApplicationWorld, unexpectedText: string) {
  if (!this.mockOpenAIServer) {
    throw new Error('Mock OpenAI server is not running');
  }
  const lastRequest = this.mockOpenAIServer.getLastRequest();
  if (!lastRequest) {
    throw new Error('No AI request has been made yet');
  }
  const systemMessage = lastRequest.messages.find(message => message.role === 'system');
  if (!systemMessage) {
    // No system message means it definitely doesn't contain the text
    return;
  }
  if (systemMessage.content && systemMessage.content.includes(unexpectedText)) {
    throw new Error(`Expected system prompt NOT to contain "${unexpectedText}", but it was found in: "${systemMessage.content.substring(0, 300)}..."`);
  }
});

Then('the last AI request should have {int} messages', async function(this: ApplicationWorld, expectedCount: number) {
  if (!this.mockOpenAIServer) {
    throw new Error('Mock OpenAI server is not running');
  }
  const lastRequest = this.mockOpenAIServer.getLastRequest();
  if (!lastRequest) {
    throw new Error('No AI request has been made yet');
  }
  const actualCount = lastRequest.messages.length;
  if (actualCount !== expectedCount) {
    throw new Error(`Expected ${expectedCount} messages in the AI request, but got ${actualCount}`);
  }
});

Then('the last AI request user message should contain {string}', async function(this: ApplicationWorld, expectedText: string) {
  if (!this.mockOpenAIServer) {
    throw new Error('Mock OpenAI server is not running');
  }
  // Poll for the request to arrive — there can be a delay between pressing Enter
  // and the mock server actually receiving the HTTP request.
  const lastRequest = await backOff(
    async () => {
      const request = this.mockOpenAIServer!.getLastRequest();
      if (!request) throw new Error('No AI request has been made yet');
      return request;
    },
    { numOfAttempts: 40, startingDelay: 250, timeMultiple: 1, maxDelay: 250, delayFirstAttempt: true },
  );
  // Find the last user message in the request
  const userMessages = lastRequest.messages.filter(message => message.role === 'user');
  if (userMessages.length === 0) {
    throw new Error('No user message found in the AI request');
  }
  const lastUserMessage = userMessages[userMessages.length - 1];
  const content = lastUserMessage.content ?? '';
  const normalizedExpectedText = expectedText.replaceAll('\\n', '\n');
  const contentHasExpectedText = content.includes(expectedText) || content.includes(normalizedExpectedText);
  if (!contentHasExpectedText) {
    throw new Error(`Expected user message to contain "${expectedText}", but got: "${content}"`);
  }
});

Then('the last AI request user message should not contain {string}', async function(this: ApplicationWorld, unexpectedText: string) {
  if (!this.mockOpenAIServer) {
    throw new Error('Mock OpenAI server is not running');
  }
  const lastRequest = this.mockOpenAIServer.getLastRequest();
  if (!lastRequest) {
    throw new Error('No AI request has been made yet');
  }
  // Find the last user message in the request
  const userMessages = lastRequest.messages.filter(message => message.role === 'user');
  if (userMessages.length === 0) {
    throw new Error('No user message found in the AI request');
  }
  const lastUserMessage = userMessages[userMessages.length - 1];
  if (lastUserMessage.content && lastUserMessage.content.includes(unexpectedText)) {
    throw new Error(`Expected user message NOT to contain "${unexpectedText}", but it was found in: "${lastUserMessage.content.substring(0, 200)}..."`);
  }
});
// Factory function to create scenario-specific provider config
// Returns a new object each time to avoid state pollution between scenarios
function createProviderConfig(): AIProviderConfig {
  return {
    provider: 'TestProvider',
    baseURL: 'http://127.0.0.1:0/v1', // Will be updated with actual port when mock server starts
    apiKey: 'test-api-key', // Required by isAIAvailable() for non-Ollama providers
    models: [
      { name: 'test-model', features: ['language'] },
      { name: 'test-embedding-model', features: ['language', 'embedding'] },
      { name: 'test-speech-model', features: ['speech'] },
    ],
    providerClass: 'openAICompatible',
    isPreset: false,
    enabled: true,
  };
}

const desiredModelParameters = { temperature: 0.7, systemPrompt: 'You are a helpful assistant.', topP: 0.95 };

// Step to remove AI settings for testing config errors
Given('I remove test ai settings', function(this: ApplicationWorld) {
  const settingsPath = path.resolve(process.cwd(), 'test-artifacts', this.scenarioSlug, 'userData-test', 'settings', 'settings.json');
  if (fs.existsSync(settingsPath)) {
    const existing = fs.readJsonSync(settingsPath) as ISettingFile;
    // Remove aiSettings but keep other settings
    const { aiSettings: _removed, ...rest } = existing;
    fs.writeJsonSync(settingsPath, rest, { spaces: 2 });
  }
});
Given('I ensure test ai settings exists', function(this: ApplicationWorld) {
  const settingsPath = path.resolve(process.cwd(), 'test-artifacts', this.scenarioSlug, 'userData-test', 'settings', 'settings.json');
  const parsed = fs.readJsonSync(settingsPath) as Record<string, unknown>;
  const actual = (parsed.aiSettings as Record<string, unknown> | undefined) || null;
  if (!actual) {
    throw new Error('aiSettings not found in settings file');
  }
  const actualProviders = (actual.providers as Array<Record<string, unknown>>) || [];
  // If providerConfig is set (from mock server), use it; otherwise create expected config
  // and use actual baseURL from settings (for UI-configured scenarios)
  let providerConfig: AIProviderConfig;
  const providerName = 'TestProvider';
  const existingProvider = actualProviders.find(p => p.provider === providerName) as AIProviderConfig | undefined;
  if (this.providerConfig) {
    // Use the mock server's providerConfig
    providerConfig = this.providerConfig;
  } else if (existingProvider) {
    // For UI-configured scenarios: build expected config using actual baseURL
    providerConfig = createProviderConfig();
    providerConfig.baseURL = existingProvider.baseURL ?? providerConfig.baseURL;
    // UI-created providers won't have an apiKey — align expected with actual
    if (!existingProvider.apiKey) {
      delete (providerConfig as unknown as Record<string, unknown>).apiKey;
    }
  } else {
    providerConfig = createProviderConfig();
  }
  // Build expected aiSettings from providerConfig and compare with actual
  const modelsArray = providerConfig.models;
  const modelName = modelsArray[0]?.name;
  // Check TestProvider exists
  const testProvider = actualProviders.find(p => p.provider === providerName);
  if (!testProvider) {
    console.error('TestProvider not found in actual providers:', JSON.stringify(actualProviders, null, 2));
    throw new Error('TestProvider not found in aiSettings');
  }
  // Verify TestProvider configuration
  if (!isEqual(testProvider, providerConfig)) {
    console.error('TestProvider config mismatch. expected:', JSON.stringify(providerConfig, null, 2));
    console.error('TestProvider config actual:', JSON.stringify(testProvider, null, 2));
    throw new Error('TestProvider configuration does not match expected');
  }
  // Check ComfyUI provider exists
  const comfyuiProvider = actualProviders.find(p => p.provider === 'comfyui');
  if (!comfyuiProvider) {
    console.error('ComfyUI provider not found in actual providers:', JSON.stringify(actualProviders, null, 2));
    throw new Error('ComfyUI provider not found in aiSettings');
  }
  // Verify ComfyUI has test-flux model with workflow path
  const comfyuiModels = (comfyuiProvider.models as Array<Record<string, unknown>>) || [];
  const testFluxModel = comfyuiModels.find(m => m.name === 'test-flux');
  if (!testFluxModel) {
    console.error('test-flux model not found in ComfyUI models:', JSON.stringify(comfyuiModels, null, 2));
    throw new Error('test-flux model not found in ComfyUI provider');
  }
  // Verify workflow path
  const parameters = testFluxModel.parameters as Record<string, unknown> | undefined;
  if (!parameters || parameters.workflowPath !== 'C:/test/mock/workflow.json') {
    console.error('Workflow path mismatch. expected: C:/test/mock/workflow.json, actual:', parameters?.workflowPath);
    throw new Error('Workflow path not correctly saved');
  }
  // Verify default config
  const defaultConfig = actual.defaultConfig as Record<string, unknown>;
  const defaultModel = defaultConfig.default as Record<string, unknown>;
  if (defaultModel?.provider !== providerName || defaultModel?.model !== modelName) {
    console.error('Default config mismatch. expected provider:', providerName, 'model:', modelName);
    console.error('actual defaultModel:', JSON.stringify(defaultModel, null, 2));
    throw new Error('Default configuration does not match expected');
  }
});
// Version without datatable for simple cases
Given('I add test ai settings', async function(this: ApplicationWorld) {
const settingsPath = path.resolve(process.cwd(), 'test-artifacts', this.scenarioSlug, 'userData-test', 'settings', 'settings.json');
let existing = {} as ISettingFile;
if (fs.existsSync(settingsPath)) {
existing = fs.readJsonSync(settingsPath) as ISettingFile;
} else {
fs.ensureDirSync(path.dirname(settingsPath));
}
// Initialize scenario-specific providerConfig if not set
if (!this.providerConfig) {
this.providerConfig = createProviderConfig();
}
const providerConfig = this.providerConfig;
// createProviderConfig orders models as: [chat, embedding, speech]
const modelsArray = providerConfig.models;
const modelName = modelsArray[0]?.name;
const embeddingModelName = modelsArray[1]?.name;
const speechModelName = modelsArray[2]?.name;
const newAi: AIGlobalSettings = {
providers: [providerConfig],
defaultConfig: {
default: {
provider: providerConfig.provider,
model: modelName,
},
embedding: {
provider: providerConfig.provider,
model: embeddingModelName,
},
speech: {
provider: providerConfig.provider,
model: speechModelName,
},
modelParameters: desiredModelParameters,
},
};
const newPreferences = existing.preferences || {};
fs.writeJsonSync(settingsPath, { ...existing, aiSettings: newAi, preferences: newPreferences } as ISettingFile, { spaces: 2 });
});
// Version with datatable for advanced configuration
Given('I add test ai settings:', async function(this: ApplicationWorld, dataTable: DataTable) {
const settingsPath = path.resolve(process.cwd(), 'test-artifacts', this.scenarioSlug, 'userData-test', 'settings', 'settings.json');
let existing = {} as ISettingFile;
if (fs.existsSync(settingsPath)) {
existing = fs.readJsonSync(settingsPath) as ISettingFile;
} else {
fs.ensureDirSync(path.dirname(settingsPath));
}
// Initialize scenario-specific providerConfig if not set
if (!this.providerConfig) {
this.providerConfig = createProviderConfig();
}
const providerConfig = this.providerConfig;
// createProviderConfig orders models as: [chat, embedding, speech]
const modelsArray = providerConfig.models;
const modelName = modelsArray[0]?.name;
const embeddingModelName = modelsArray[1]?.name;
const speechModelName = modelsArray[2]?.name;
// Parse options from data table
let freeModel: string | undefined;
let aiGenerateBackupTitle: boolean | undefined;
let aiGenerateBackupTitleTimeout: number | undefined;
if (dataTable && typeof dataTable.raw === 'function') {
// Process all rows as key-value pairs (no header row)
for (const row of dataTable.raw()) {
const key = (row[0] ?? '').trim();
const value = (row[1] ?? '').trim();
if (key === 'freeModel') {
// If value is 'true', enable freeModel using the same model as the main model
if (value === 'true') {
freeModel = modelName;
}
} else if (key === 'aiGenerateBackupTitle') {
aiGenerateBackupTitle = value === 'true';
} else if (key === 'aiGenerateBackupTitleTimeout') {
const parsedTimeout = Number.parseInt(value, 10);
// Guard against non-numeric values so NaN is never written to preferences
if (!Number.isNaN(parsedTimeout)) {
aiGenerateBackupTitleTimeout = parsedTimeout;
}
}
}
}
const newAi: AIGlobalSettings = {
providers: [providerConfig],
defaultConfig: {
default: {
provider: providerConfig.provider,
model: modelName,
},
embedding: {
provider: providerConfig.provider,
model: embeddingModelName,
},
speech: {
provider: providerConfig.provider,
model: speechModelName,
},
...(freeModel
? {
free: {
provider: providerConfig.provider,
model: freeModel,
},
}
: {}),
modelParameters: desiredModelParameters,
},
};
const newPreferences = {
...(existing.preferences || {}),
...(aiGenerateBackupTitle !== undefined ? { aiGenerateBackupTitle } : {}),
...(aiGenerateBackupTitleTimeout !== undefined ? { aiGenerateBackupTitleTimeout } : {}),
};
fs.writeJsonSync(settingsPath, { ...existing, aiSettings: newAi, preferences: newPreferences } as ISettingFile, { spaces: 2 });
});
async function clearAISettings(scenarioRoot?: string) {
const root = scenarioRoot || process.cwd();
const settingsPath = path.resolve(root, 'userData-test', 'settings', 'settings.json');
if (!(await fs.pathExists(settingsPath))) return;
const parsed = await fs.readJson(settingsPath) as ISettingFile;
const cleaned = omit(parsed, ['aiSettings']);
await fs.writeJson(settingsPath, cleaned, { spaces: 2 });
}
// Step to send ask AI with selection IPC message
When('I send ask AI with selection message with text {string} and workspace {string}', async function(this: ApplicationWorld, selectionText: string, workspaceName: string) {
const currentWindow = await this.getWindow('main');
if (!currentWindow) {
throw new Error('Main window not found');
}
// Get workspace ID from workspace name
const workspaceId = await currentWindow.evaluate(async (name: string): Promise<string | undefined> => {
// Use a narrow type view of window.service to avoid coupling to preload internals.
const windowWithService = window as unknown as { service: { workspace: { getWorkspacesAsList: () => Promise<IWorkspace[]> } } };
const workspaces = await windowWithService.service.workspace.getWorkspacesAsList();
const workspace = workspaces.find((ws) => ws.name === name);
return workspace?.id;
}, workspaceName);
if (!workspaceId) {
throw new Error(`Workspace with name "${workspaceName}" not found`);
}
// Send IPC message to trigger "Talk with AI" through main process
// Use app.evaluate to access Electron main process API
if (!this.app) {
throw new Error('Electron app not found');
}
const sendResult = await this.app.evaluate(async ({ BrowserWindow }, { text, wsId }: { text: string; wsId: string }) => {
// The first window created is always the main window in TidGi
const allWindows = BrowserWindow.getAllWindows();
const mainWindow = allWindows[0];
if (!mainWindow) {
return { success: false, error: 'No windows found', windowCount: allWindows.length };
}
const data = {
selectionText: text,
wikiUrl: `tidgi://${wsId}`,
workspaceId: wsId,
};
// Send IPC message to renderer
mainWindow.webContents.send('ask-ai-with-selection', data);
return { success: true };
}, { text: selectionText, wsId: workspaceId });
if (!sendResult.success) {
throw new Error(`Failed to send IPC message: ${sendResult.error || 'Unknown error'}`);
}
// Small delay to ensure IPC message is processed (cross-process communication needs time)
await new Promise(resolve => setTimeout(resolve, 200));
});
export { clearAISettings };