ciaochris committed
Commit 1ad671f · 1 Parent(s): c901b5f

Redesign Rhythma as a session-led listening experience


Shift the app surface from a modulation demo toward the approved wellness-companion experience. This integrates session-led copy into the request flow, gives the UI named session outputs, and rewrites the examples and README so the product voice matches the intended audience.

Constraint: Task 3 must stay verifiable without a local gradio install
Constraint: Backend failures must not leave reassuring success copy on screen
Rejected: Pure copy polish without wiring the session fields into the output path | did not satisfy the redesign behavior
Confidence: medium
Scope-risk: moderate
Reversibility: clean
Directive: Keep the session-led outputs and failure-safe copy paths aligned if later tasks expand the UI further
Tested: python -m py_compile app.py tests/test_rhythma_ui_copy.py
Tested: PYTEST_DISABLE_PLUGIN_AUTOLOAD=1 python -c "import pytest; raise SystemExit(pytest.main(['tests/test_rhythma_ui_copy.py','-q','-p','no:cacheprovider']))"
Not-tested: Full live Gradio rendering in this environment because gradio is not installed locally

Files changed (3)
  1. README.md +23 -7
  2. app.py +102 -49
  3. tests/test_rhythma_ui_copy.py +129 -0
README.md CHANGED
@@ -1,8 +1,8 @@
 ---
-title: 'Rhythma: The Living Modulation Engine'
-emoji:
+title: 'Rhythma'
+emoji: 🎧
 colorFrom: pink
-colorTo: purple
+colorTo: orange
 sdk: gradio
 sdk_version: 6.10.0
 app_file: app.py
@@ -10,11 +10,16 @@ pinned: true
 license: apache-2.0
 thumbnail: >-
   https://cdn-uploads.huggingface.co/production/uploads/64628a722a83863b97beed5e/y4bWysnsPMV2asL_LFB9X.jpeg
-short_description: 🔊Reverse Active Denial System
+short_description: An artful wellness companion for reflective listening sessions
 ---
 
 Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
+## Rhythma
+
+Rhythma is an artful wellness companion that translates a felt state into a named listening session.
+The app pairs session profiling, layered audio, and reflective copy so the experience feels closer to a guided ritual than a parameter dashboard.
+
 ## Local Development
 
 Install the dependencies, then run the Gradio app:
@@ -24,12 +29,23 @@ pip install -r requirements.txt
 python app.py
 ```
 
-Set `GROQ_API_KEY` to enable Groq-backed text classification and audio transcription.
+Set `GROQ_API_KEY` to enable Groq-backed text classification and voice-note transcription.
+
+If `gradio` is not installed locally, you can still run the focused test suite for `app.py` through the stubbed UI tests in `tests/test_rhythma_ui_copy.py`.
+
+## Session-Led Experience
+
+- The main prompt asks what you are carrying or what intention you want to hold.
+- The output surface now leads with a named session and emotional tone before the listening guidance appears.
+- `Tone Center` and `Session Pattern` can stay on automatic or be gently overridden from the session controls.
+- The output area is framed around `Your Listening Path`, `Session Reflection`, audio playback, and waveform views.
+- Example prompts now reflect a calmer, artful wellness voice rather than a technical modulation demo.
 
 ## Project Structure
 
-- `app.py`: Gradio interface and request pipeline
+- `app.py`: session-led Gradio interface and request pipeline
 - `rhythma_engine.py`: waveform generation, audio export, and visualization
-- `rhythma_analysis.py`: text/audio analysis, optional Groq integration, and result shaping
+- `rhythma_analysis.py`: text/audio analysis, optional Groq integration, and session-profile shaping
 - `rhythma.py`: compatibility facade for the public classes
 - `tests/test_rhythma_regression.py`: regression tests for the core behavior
+- `tests/test_rhythma_ui_copy.py`: focused UI-copy tests that avoid a real Gradio import
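The README's claim that the UI tests run without a local gradio install rests on stubbing the module before `app.py` is imported. A minimal, self-contained sketch of that trick (nothing below touches the real repo):

```python
import sys
import types

# Register an empty stand-in module so `import gradio` succeeds
# even when the real package is not installed.
sys.modules["gradio"] = types.ModuleType("gradio")

import gradio  # resolves to the stub registered above

# The stub is a plain module object carrying none of the real API.
print(type(gradio))               # <class 'module'>
print(hasattr(gradio, "Blocks"))  # False
```

Any module imported afterwards that does `import gradio` receives this stub, which is exactly how the UI-copy tests load `app.py` in a gradio-free environment.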
app.py CHANGED
@@ -1,6 +1,9 @@
 import os
 import logging
-import gradio as gr
+try:
+    import gradio as gr
+except ModuleNotFoundError:  # pragma: no cover - local test environments may omit gradio
+    gr = None
 import matplotlib
 matplotlib.use('Agg')
 import tempfile
@@ -101,6 +104,26 @@ def normalize_rhythm_override(value):
     return value
 
 
+def build_session_copy(analysis_result):
+    analysis_result = analysis_result if isinstance(analysis_result, dict) else {}
+    profile = analysis_result.get("session_profile") or {}
+    emotional_state = analysis_result.get("emotional_state", "neutral")
+
+    session_name = profile.get("title") or f"{emotional_state.title()} Session"
+    emotional_tone = profile.get("emotional_tone") or "Measured and receptive"
+    guidance = profile.get("guidance") or "Stay with the pulse until your breath settles into its own pace."
+    reflection = profile.get("reflection") or "This session offers a gentle reset without pushing for intensity."
+
+    return {
+        "session_name": session_name,
+        "emotional_tone": emotional_tone,
+        "listening_path": f"{emotional_tone}. {guidance}",
+        "session_reflection": reflection,
+        "tone_center": profile.get("tone_center"),
+        "session_pattern": profile.get("pattern"),
+    }
+
+
 def rhythma_experience(
     input_text, audio_input,
     override_freq=None,
@@ -111,6 +134,7 @@ def rhythma_experience(
     input_text = input_text.strip() if input_text else ""
     freq_override_value = coerce_frequency(override_freq)
    analysis = analyze_input(input_text, audio_input)
+    session_copy = build_session_copy(analysis)
 
     analysis_text, audio_file, fig, waveform_image, symbolic = generate_modulated_experience(
         analysis,
@@ -122,76 +146,100 @@
 
     transcription = analysis.get("transcription", "") if isinstance(analysis, dict) else ""
     plot_output = fig if fig else None
-    return analysis_text, audio_file, plot_output, waveform_image, symbolic, transcription
+    listening_path = session_copy["listening_path"]
+    session_reflection = session_copy["session_reflection"]
+
+    if audio_file is None and isinstance(analysis_text, str) and analysis_text:
+        fallback_message = analysis_text or symbolic or "Session generation is unavailable right now."
+        listening_path = fallback_message
+        session_reflection = fallback_message
+
+    return (
+        f"### {session_copy['session_name']}",
+        session_copy["emotional_tone"],
+        listening_path,
+        audio_file,
+        plot_output,
+        waveform_image,
+        session_reflection,
+        transcription,
+    )
 
 # --- Create the Gradio Interface ---
 def create_interface():
-    with gr.Blocks(theme=gr.themes.Soft(), title="Rhythma: The Living Modulation Engine") as demo:
-        gr.Markdown("# Rhythma: The Living Modulation Engine")
-        gr.Markdown("### Dynamic rhythm-based sound modulation for wellbeing from Vers3Dynamics")
+    if gr is None:
+        raise ModuleNotFoundError("gradio is required to create the Rhythma interface.")
+
+    with gr.Blocks(theme=gr.themes.Soft(primary_hue="rose", secondary_hue="stone"), title="Rhythma") as demo:
+        gr.Markdown("# Rhythma")
+        gr.Markdown("### An artful wellness companion for reflective listening.")
 
         if not use_groq:
-            gr.Warning("Running with limited functionality: GROQ_API_KEY not found. "
-                       "Advanced AI analysis and audio transcription are disabled.")
+            gr.Warning(
+                "Groq analysis is unavailable. Text-led sessions still work, but live voice transcription is off."
+            )
 
         with gr.Row():
             with gr.Column(scale=1):
-                gr.Markdown("**1. Describe Your State or Intention**")
+                gr.Markdown("**1. Share what you're carrying**")
                 input_text = gr.Textbox(
-                    label="How are you feeling, or what is your intention?",
-                    placeholder="e.g., 'feeling stressed about work', 'want to relax', 'need focus'...",
-                    lines=3
+                    label="How are you feeling, or what intention would you like to hold?",
+                    placeholder="e.g., 'I need something steady before a conversation' or 'I want room to soften after a long day.'",
+                    lines=4
                 )
 
-                gr.Markdown("**Optional: Use Your Voice (Requires Groq API Key)**")
+                gr.Markdown("**Optional: add a voice note (requires Groq)**")
                 audio_input = gr.Audio(
-                    sources=["microphone"],  # Prioritize microphone
-                    type="filepath",  # RhythmaSymphAICore expects a filepath
-                    label="Record or Upload Audio" if use_groq else "Audio Input (Disabled)",
-                    interactive=use_groq  # Disable if Groq not available
+                    sources=["microphone"],
+                    type="filepath",
+                    label="Record or Upload a Voice Note" if use_groq else "Voice Note (Disabled)",
+                    interactive=use_groq
                 )
 
-                with gr.Accordion("Advanced Settings (Optional Overrides)", open=False):
+                with gr.Accordion("Session shaping controls", open=False):
                     override_freq = gr.Slider(
                         minimum=0, maximum=1000, value=0, step=1,
-                        label="Override Frequency (Hz)",
-                        info="Leave at 0 to use automatic frequency based on analysis."
+                        label="Tone Center (Hz)",
+                        info="Leave at 0 to let Rhythma choose a tone center from your session profile."
                     )
                     override_modulation = gr.Dropdown(
                         choices=["sine", "pulse", "chirp"],
                         value="sine",
-                        label="Override Modulation Type"
+                        label="Texture Shape"
                    )
                     available_patterns = list(RhythmaModulationEngine().rhythm_configs.keys())
                     override_rhythm = gr.Dropdown(
                         choices=["Automatic"] + available_patterns,
                         value="Automatic",
-                        label="Override Rhythm Pattern",
-                        info="Leave on Automatic to use the pattern inferred from the analysis."
+                        label="Session Pattern",
+                        info="Leave on Automatic to follow the pattern inferred from your session profile."
                     )
                     duration = gr.Slider(
                         minimum=3, maximum=60, value=10, step=1,
-                        label="Duration (seconds)"
+                        label="Session Length (seconds)"
                     )
 
-                generate_button = gr.Button("Generate Rhythma Experience", variant="primary", scale=2)
+                generate_button = gr.Button("Generate Session", variant="primary", scale=2)
 
             with gr.Column(scale=2):
-                gr.Markdown("**2. Experience Your Rhythma Soundscape**")
-                analysis_output = gr.Textbox(label="Rhythma Analysis & Guidance", lines=8, interactive=False)
+                gr.Markdown("**2. Receive your listening session**")
+                gr.Markdown(
+                    "_Rhythma shapes a named listening path, then renders the audio, reflection, and waveform around it._"
+                )
+                session_name_output = gr.Markdown("### Session")
+                emotional_tone_output = gr.Markdown("Measured and receptive")
+                listening_path_output = gr.Textbox(label="Your Listening Path", lines=6, interactive=False)
                 with gr.Row():
-                    audio_output = gr.Audio(label="Modulated Audio", type="filepath", interactive=False)
-                    waveform_simple = gr.Image(label="Base Waveform", interactive=False, height=100, width=200)
-                waveform_plot = gr.Plot(label="Detailed Waveform & Spectrogram")
-                symbolic_output = gr.Textbox(label="Symbolic Interpretation", interactive=False)
-                # Conditionally visible transcription output
+                    audio_output = gr.Audio(label="Session Audio", type="filepath", interactive=False)
+                    waveform_simple = gr.Image(label="Tone Center", interactive=False, height=100, width=200)
+                waveform_plot = gr.Plot(label="Session Pattern")
+                symbolic_output = gr.Textbox(label="Session Reflection", interactive=False)
                 transcription_output = gr.Textbox(
-                    label="Transcribed Audio (If Provided)",
+                    label="Transcribed Voice Note",
                     interactive=False,
-                    visible=use_groq  # Only show if Groq is potentially usable
+                    visible=use_groq
                 )
 
-        # Define button action
         generate_button.click(
             fn=rhythma_experience,
             inputs=[
@@ -200,25 +248,28 @@ def create_interface():
                 duration
             ],
             outputs=[
-                analysis_output, audio_output,
-                waveform_plot, waveform_simple, symbolic_output,
+                session_name_output, emotional_tone_output, listening_path_output,
+                audio_output, waveform_plot, waveform_simple, symbolic_output,
                 transcription_output
             ]
         )
 
-        # Add Examples
         gr.Examples(
             examples=[
-                ["I'm feeling anxious about my upcoming presentation.", None, 0, "sine", "Automatic", 10],
-                ["I feel grounded and peaceful today.", None, 0, "sine", "Automatic", 15],
-                ["I need to focus on deep work without distractions.", None, 0, "sine", "focused", 20],
-                ["I'm overwhelmed and need something steady.", None, 0, "sine", "Automatic", 10],
-                ["I'm excited and want a more energized soundscape.", None, 0, "pulse", "active", 10],
-                ["I want to relax after a long day.", None, 0, "sine", "relaxed", 30],
-                ["I'm feeling low and want something gentle.", None, 0, "sine", "Automatic", 15],
+                ["I need something steady before a difficult conversation.", None, 0, "sine", "Automatic", 12],
+                ["I want to feel grounded and open as the evening slows down.", None, 0, "sine", "Automatic", 18],
+                ["I need a clear horizon for deep work.", None, 0, "sine", "focused", 20],
+                ["Everything feels loud and I want a softer landing.", None, 0, "sine", "Automatic", 14],
+                ["I feel bright and want a livelier pulse without losing calm.", None, 0, "pulse", "active", 10],
+                ["Give me a long unwind after a heavy day.", None, 0, "sine", "relaxed", 30],
+                ["I want a gentle session for a low-energy morning.", None, 0, "sine", "Automatic", 16],
             ],
             inputs=[input_text, audio_input, override_freq, override_modulation, override_rhythm, duration],
-            outputs=[analysis_output, audio_output, waveform_plot, waveform_simple, symbolic_output, transcription_output],
+            outputs=[
+                session_name_output, emotional_tone_output, listening_path_output,
+                audio_output, waveform_plot, waveform_simple, symbolic_output,
+                transcription_output
+            ],
             fn=rhythma_experience,
             cache_examples=False
         )
@@ -226,10 +277,10 @@ def create_interface():
         gr.Markdown("---")
         gr.Markdown("""
        ## About Rhythma
-        Rhythma creates personalized soundscapes using frequency modulation based on your described emotional state or intention.
-        It leverages AI analysis (enhanced with Groq if available) and principles of rhythmic sound design.
-        **Note:** This is an experimental tool. The frequencies and interpretations are based on various theories and are not medical advice.
-        © 2025 Vers3Dynamics
+        Rhythma is an artful wellness companion that turns a felt state into a reflective listening session.
+        It uses optional AI analysis, session profiling, and rhythmic sound design to shape a tone center, pattern, and guided path for the moment you are in.
+        **Note:** Rhythma is for reflective listening and personal wellbeing rituals. It is not medical advice or a clinical treatment.
+        © 2025 Vers3Dynamics
         """)
 
     return demo
@@ -238,6 +289,8 @@ def create_interface():
 if __name__ == "__main__":
     if symphai_core is None:
         LOGGER.error("Cannot launch Gradio app because RhythmaSymphAICore failed to initialize.")
+    elif gr is None:
+        LOGGER.error("Cannot launch Gradio app because gradio is not installed.")
     else:
         app_demo = create_interface()
         app_demo.launch()
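The session-copy helper introduced in this diff is pure Python and easy to exercise in isolation. The sketch below mirrors `build_session_copy` (field names and default strings taken from the change); it is an illustration, not the canonical implementation:

```python
def build_session_copy(analysis_result):
    # Tolerate non-dict input and a missing or partial session profile,
    # mirroring the defensive defaults in the diff above.
    analysis_result = analysis_result if isinstance(analysis_result, dict) else {}
    profile = analysis_result.get("session_profile") or {}
    emotional_state = analysis_result.get("emotional_state", "neutral")

    emotional_tone = profile.get("emotional_tone") or "Measured and receptive"
    guidance = profile.get("guidance") or "Stay with the pulse until your breath settles into its own pace."
    return {
        "session_name": profile.get("title") or f"{emotional_state.title()} Session",
        "emotional_tone": emotional_tone,
        "listening_path": f"{emotional_tone}. {guidance}",
        "session_reflection": profile.get("reflection")
        or "This session offers a gentle reset without pushing for intensity.",
        "tone_center": profile.get("tone_center"),
        "session_pattern": profile.get("pattern"),
    }


# Even a bare analysis dict yields a fully named session card.
card = build_session_copy({"emotional_state": "anxious"})
print(card["session_name"])    # Anxious Session
print(card["emotional_tone"])  # Measured and receptive
```

Because every field falls back to neutral copy, the UI never renders an empty session header even when the analyzer returns nothing usable.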
tests/test_rhythma_ui_copy.py ADDED
@@ -0,0 +1,129 @@
+import importlib
+import sys
+import types
+from unittest import mock
+
+import pytest
+
+
+def import_app_with_gradio_stub():
+    sys.modules.pop("app", None)
+    sys.modules["gradio"] = types.ModuleType("gradio")
+    return importlib.import_module("app")
+
+
+def import_app_without_gradio():
+    original_import = __import__
+
+    def blocked_import(name, globals=None, locals=None, fromlist=(), level=0):
+        if name == "gradio":
+            raise ModuleNotFoundError("No module named 'gradio'")
+        return original_import(name, globals, locals, fromlist, level)
+
+    sys.modules.pop("app", None)
+    sys.modules.pop("gradio", None)
+    with mock.patch("builtins.__import__", side_effect=blocked_import):
+        return importlib.import_module("app")
+
+
+def test_normalize_rhythm_override_treats_automatic_as_none():
+    app = import_app_with_gradio_stub()
+
+    assert app.normalize_rhythm_override("Automatic") is None
+
+
+def test_build_session_copy_uses_human_labels():
+    app = import_app_with_gradio_stub()
+    analysis = {
+        "emotional_state": "anxious",
+        "session_profile": {
+            "title": "Grounding Tide",
+            "emotional_tone": "Settling and steady",
+            "guidance": "Let your breath fall behind the pulse until the session feels steady.",
+            "reflection": "This session favors stability over intensity.",
+        },
+    }
+
+    card = app.build_session_copy(analysis)
+
+    assert card["session_name"] == "Grounding Tide"
+    assert card["listening_path"].startswith("Settling and steady")
+    assert "stability over intensity" in card["session_reflection"]
+
+
+def test_import_without_gradio_uses_interface_guard():
+    app = import_app_without_gradio()
+
+    assert app.gr is None
+    with pytest.raises(ModuleNotFoundError, match="gradio is required"):
+        app.create_interface()
+
+
+def test_rhythma_experience_returns_session_led_copy(monkeypatch):
+    app = import_app_with_gradio_stub()
+    analysis = {
+        "emotional_state": "anxious",
+        "transcription": "",
+        "session_profile": {
+            "title": "Grounding Tide",
+            "emotional_tone": "Settling and steady",
+            "guidance": "Let your breath fall behind the pulse until the session feels steady.",
+            "reflection": "This session favors stability over intensity.",
+        },
+    }
+
+    monkeypatch.setattr(app, "analyze_input", lambda text, audio: analysis)
+    monkeypatch.setattr(
+        app,
+        "generate_modulated_experience",
+        lambda *args, **kwargs: ("legacy analysis", "session.wav", "plot", "image", "legacy symbolic"),
+    )
+
+    outputs = app.rhythma_experience(
+        "I feel anxious and need to settle down",
+        None,
+        override_freq=0,
+        override_modulation="sine",
+        override_rhythm="Automatic",
+        duration=5,
+    )
+
+    assert outputs[0] == "### Grounding Tide"
+    assert outputs[1] == "Settling and steady"
+    assert outputs[2].startswith("Settling and steady")
+    assert outputs[3] == "session.wav"
+    assert outputs[6] == "This session favors stability over intensity."
+    assert outputs[7] == ""
+
+
+def test_rhythma_experience_degrades_copy_consistently_on_generation_failure(monkeypatch):
+    app = import_app_with_gradio_stub()
+    analysis = {
+        "emotional_state": "anxious",
+        "transcription": "",
+        "session_profile": {
+            "title": "Grounding Tide",
+            "emotional_tone": "Settling and steady",
+            "guidance": "Let your breath fall behind the pulse until the session feels steady.",
+            "reflection": "This session favors stability over intensity.",
+        },
+    }
+
+    monkeypatch.setattr(app, "analyze_input", lambda text, audio: analysis)
+    monkeypatch.setattr(
+        app,
+        "generate_modulated_experience",
+        lambda *args, **kwargs: ("Generation unavailable", None, None, None, "legacy symbolic"),
+    )
+
+    outputs = app.rhythma_experience(
+        "I feel anxious and need to settle down",
+        None,
+        override_freq=0,
+        override_modulation="sine",
+        override_rhythm="Automatic",
+        duration=5,
+    )
+
+    assert outputs[2] == "Generation unavailable"
+    assert outputs[6] == "Generation unavailable"
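The blocked-import helper in these tests is a reusable pattern for simulating a missing dependency. Below is a self-contained sketch of the same technique; the `demo_app` module, its temp-file contents, and the `import_with_blocked_dependency` name are invented here purely for illustration:

```python
import builtins
import importlib
import os
import sys
import tempfile
import textwrap
from unittest import mock


def import_with_blocked_dependency(blocked_name, target_name):
    """Import target_name while pretending blocked_name is not installed."""
    original_import = builtins.__import__

    def blocking_import(name, globals=None, locals=None, fromlist=(), level=0):
        if name == blocked_name:
            raise ModuleNotFoundError(f"No module named {blocked_name!r}")
        return original_import(name, globals, locals, fromlist, level)

    # Drop cached copies so both imports actually re-execute.
    sys.modules.pop(blocked_name, None)
    sys.modules.pop(target_name, None)
    with mock.patch("builtins.__import__", side_effect=blocking_import):
        return importlib.import_module(target_name)


# Write a throwaway module that guards its gradio import the same way app.py does.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "demo_app.py"), "w") as f:
    f.write(textwrap.dedent("""\
        try:
            import gradio as gr
        except ModuleNotFoundError:
            gr = None
    """))
sys.path.insert(0, workdir)

demo = import_with_blocked_dependency("gradio", "demo_app")
print(demo.gr)  # None
```

Patching `builtins.__import__` works because `import` statements inside the target module route through it, so the guard path is exercised even on machines where the dependency is actually installed.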