Commit 5741964

988231: Added UG content and code snippet for Chat UI integration with STT.
1 parent 54d6ec3 commit 5741964

File tree

8 files changed

+438
-0
lines changed

Lines changed: 55 additions & 0 deletions
---
layout: post
title: Speech-to-Text With ##Platform_Name## Chat UI Control | Syncfusion
description: Check out and learn about configuring Speech-to-Text in the ##Platform_Name## Chat UI control of Syncfusion Essential JS 2 and more.
platform: ej2-asp-core-mvc
control: Azure Open AI
publishingplatform: ##Platform_Name##
documentation: ug
---

# Speech-to-Text in ASP.NET MVC Chat UI

The Syncfusion ASP.NET MVC Chat UI control integrates Speech-to-Text functionality through the browser's [Web Speech API](https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API). This converts spoken words into text using the device's microphone, allowing users to interact with the Chat UI through voice input.

## Prerequisites

Before integrating Speech-to-Text, install the [Syncfusion.EJ2.MVC5](https://www.nuget.org/packages/Syncfusion.EJ2.MVC5) NuGet package to use ASP.NET MVC controls in the application.

## Set Up the Chat UI Control

Follow the Syncfusion Chat UI [Getting Started](./getting-started) guide to configure and render the Chat UI control in the application.

## Configure Speech-to-Text

To enable Speech-to-Text functionality in the ASP.NET MVC Chat UI control, update the `index.cshtml` file to incorporate the Web Speech API.

The [SpeechToText](https://ej2.syncfusion.com/aspnetmvc/documentation/speech-to-text/getting-started) control listens to audio input from the device's microphone, transcribes spoken words into text, and updates the Chat UI's editable footer with the recognized text. Once the transcription appears in the footer, users can send it as a message to others.

### Configuration Options

* **[`Lang`](https://help.syncfusion.com/cr/aspnetmvc-js2/Syncfusion.EJ2.Inputs.SpeechToText.html#Syncfusion_EJ2_Inputs_SpeechToText_Lang)**: Specifies the language for speech recognition. For example:

  * `en-US` for American English
  * `fr-FR` for French

* **[`AllowInterimResults`](https://help.syncfusion.com/cr/aspnetmvc-js2/Syncfusion.EJ2.Inputs.SpeechToText.html#Syncfusion_EJ2_Inputs_SpeechToText_AllowInterimResults)**: Set to `true` to receive real-time (interim) recognition results, or `false` to receive only final results.
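As an illustration of how the two options above fit together (this helper is not part of the shipped snippet; its name and default values are assumptions), application code can merge user-supplied settings with sensible defaults before creating the control:

```javascript
// Hypothetical helper: merges Speech-to-Text options with defaults.
// The option names mirror the Lang / AllowInterimResults settings above;
// the defaults chosen here ('en-US', final results only) are assumptions.
function resolveSpeechOptions(userOptions) {
  const defaults = { lang: 'en-US', allowInterimResults: false };
  return Object.assign({}, defaults, userOptions);
}

// Usage: request French recognition with interim results enabled.
const frenchOptions = resolveSpeechOptions({ lang: 'fr-FR', allowInterimResults: true });
```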
{% tabs %}
{% highlight razor tabtitle="CSHTML" %}
{% include code-snippet/chatui/stt/razor %}
{% endhighlight %}
{% highlight c# tabtitle="SpeechToText.cs" %}
{% include code-snippet/chatui/stt/speechtotext.cs %}
{% endhighlight %}
{% endtabs %}

![Integrating Speech-to-Text with Chat UI](images/chatui-stt.png)

## Error Handling

The `SpeechToText` control provides events to handle errors that may occur during speech recognition. For more information, refer to the [Error Handling](https://ej2.syncfusion.com/aspnetmvc/documentation/speech-to-text/speech-recognition#error-handling) section in the documentation.
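A common pattern in an error event handler (shown here as an illustrative sketch, not the control's built-in behavior) is to map the Web Speech API error codes to user-facing messages; the message wording below is an assumption:

```javascript
// Illustrative sketch: map Web Speech API error codes to friendly messages.
// The code values come from the SpeechRecognitionErrorEvent specification;
// the message text is an assumption for this example.
function describeSpeechError(errorCode) {
  const messages = {
    'no-speech': 'No speech was detected. Please try again.',
    'audio-capture': 'No microphone was found or it is not working.',
    'not-allowed': 'Microphone access was denied.',
    'network': 'A network error interrupted speech recognition.'
  };
  return messages[errorCode] || 'Speech recognition failed (' + errorCode + ').';
}
```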
## Browser Compatibility

The `SpeechToText` control relies on the [Speech Recognition API](https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition), which has limited browser support. Refer to the [Browser Compatibility](https://ej2.syncfusion.com/aspnetmvc/documentation/speech-to-text/speech-recognition#browser-support) section for detailed information.
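Because support varies by browser, a page can feature-detect the API before initializing voice input. A minimal sketch (the helper name is an assumption; the check itself follows the standard vendor-prefixed pattern):

```javascript
// Returns true when the given global object exposes the Speech Recognition
// API, either unprefixed or with the legacy 'webkit' prefix (Chromium, Safari).
function speechRecognitionSupported(globalObj) {
  return 'SpeechRecognition' in globalObj || 'webkitSpeechRecognition' in globalObj;
}

// In the browser, call it with the real global: speechRecognitionSupported(window)
```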
Lines changed: 55 additions & 0 deletions
---
layout: post
title: Speech-to-Text With ##Platform_Name## Chat UI Control | Syncfusion
description: Check out and learn about configuring Speech-to-Text in the ##Platform_Name## Chat UI control of Syncfusion Essential JS 2 and more.
platform: ej2-asp-core-mvc
control: Azure Open AI
publishingplatform: ##Platform_Name##
documentation: ug
---

# Speech-to-Text in ASP.NET Core Chat UI

The Syncfusion ASP.NET Core Chat UI control integrates Speech-to-Text functionality through the browser's [Web Speech API](https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API). This converts spoken words into text using the device's microphone, allowing users to interact with the Chat UI through voice input.

## Prerequisites

Before integrating Speech-to-Text, install the [Syncfusion.EJ2.AspNet.Core](https://www.nuget.org/packages/Syncfusion.EJ2.AspNet.Core) NuGet package to use ASP.NET Core controls in the application.

## Set Up the Chat UI Control

Follow the Syncfusion Chat UI [Getting Started](./getting-started) guide to configure and render the Chat UI control in the application.

## Configure Speech-to-Text

To enable Speech-to-Text functionality in the ASP.NET Core Chat UI control, update the `index.cshtml` file to incorporate the Web Speech API.

The [SpeechToText](https://ej2.syncfusion.com/aspnetcore/documentation/speech-to-text/getting-started) control listens to audio input from the device's microphone, transcribes spoken words into text, and updates the Chat UI's editable footer with the recognized text. Once the transcription appears in the footer, users can send it as a message to others.

### Configuration Options

* **[`lang`](https://help.syncfusion.com/cr/aspnetcore-js2/Syncfusion.EJ2.Inputs.SpeechToText.html#Syncfusion_EJ2_Inputs_SpeechToText_Lang)**: Specifies the language for speech recognition. For example:

  * `en-US` for American English
  * `fr-FR` for French

* **[`allowInterimResults`](https://help.syncfusion.com/cr/aspnetcore-js2/Syncfusion.EJ2.Inputs.SpeechToText.html#Syncfusion_EJ2_Inputs_SpeechToText_AllowInterimResults)**: Set to `true` to receive real-time (interim) recognition results, or `false` to receive only final results.
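When interim results are enabled, a handler typically displays the in-progress phrase while committing only final results to the message text. The reducer below is an illustrative sketch of that bookkeeping (the function and field names are assumptions, not part of the control's API):

```javascript
// Illustrative sketch: fold a stream of recognition results into committed
// (final) text plus the current interim phrase. Field names are assumptions.
function foldResults(results) {
  let finalText = '';
  let interim = '';
  for (const result of results) {
    if (result.isFinal) {
      // Commit the phrase and clear the provisional text.
      finalText += (finalText ? ' ' : '') + result.text;
      interim = '';
    } else {
      // Interim results replace each other as recognition refines the phrase.
      interim = result.text;
    }
  }
  return { finalText: finalText, interim: interim };
}
```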
{% tabs %}
{% highlight razor tabtitle="CSHTML" %}
{% include code-snippet/chatui/stt/tagHelper %}
{% endhighlight %}
{% highlight c# tabtitle="SpeechToText.cs" %}
{% include code-snippet/chatui/stt/speechtotext.cs %}
{% endhighlight %}
{% endtabs %}

![Integrating Speech-to-Text with Chat UI](images/chatui-stt.png)

## Error Handling

The `SpeechToText` control provides events to handle errors that may occur during speech recognition. For more information, refer to the [Error Handling](https://ej2.syncfusion.com/aspnetcore/documentation/speech-to-text/speech-recognition#error-handling) section in the documentation.

## Browser Compatibility

The `SpeechToText` control relies on the [Speech Recognition API](https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition), which has limited browser support. Refer to the [Browser Compatibility](https://ej2.syncfusion.com/aspnetcore/documentation/speech-to-text/speech-recognition#browser-support) section for detailed information.
Lines changed: 144 additions & 0 deletions
1+
@using Syncfusion.EJ2.InteractiveChat;
2+
@using Newtonsoft.Json;
3+
4+
<div class="integration-speechtotext">
5+
@Html.EJS().ChatUI("chatui").Created("onCreate").FooterTemplate("#footerContent").Messages(ViewBag.ChatMessagesData).User(ViewBag.CurrentUser).Render()
6+
</div>
7+
8+
<script>
9+
var chatuiObj;
10+
var chatuiFooter;
11+
var sendButton;
12+
var speechToTextObj;
13+
14+
function onCreate() {
15+
chatuiObj = ej.base.getComponent(document.getElementById("chatui"), "chat-ui");
16+
// Initialize Speech-to-Text component
17+
speechToTextObj = new ej.inputs.SpeechToText({
18+
transcriptChanged: onTranscriptChange,
19+
onStop: onListeningStop,
20+
created: onCreated,
21+
cssClass: 'e-flat'
22+
});
23+
speechToTextObj.appendTo('#speechToText');
24+
}
25+
26+
// Updates transcript in the input area when speech-to-text transcribes
27+
function onTranscriptChange(args) {
28+
document.querySelector('#chatui-footer').innerText = args.transcript;
29+
}
30+
31+
// Handles actions when speech listening stops
32+
function onListeningStop() {
33+
toggleButtons();
34+
}
35+
36+
// Handles actions after component creation
37+
function onCreated() {
38+
chatuiFooter = document.querySelector('#chatui-footer');
39+
sendButton = document.querySelector('#chatui-sendButton');
40+
sendButton.addEventListener('click', sendIconClicked);
41+
chatuiFooter.addEventListener('input', toggleButtons);
42+
43+
chatuiFooter.addEventListener('keydown', function (e) {
44+
if (e.key === 'Enter' && !e.shiftKey) {
45+
sendIconClicked();
46+
e.preventDefault();
47+
}
48+
});
49+
toggleButtons();
50+
}
51+
52+
// Toggles the visibility of the send and speech-to-text buttons
53+
function toggleButtons() {
54+
var hasText = chatuiFooter.innerText.trim() !== '';
55+
sendButton.classList.toggle('visible', hasText);
56+
speechToTextObj.element.classList.toggle('visible', !hasText);
57+
if (!hasText && (chatuiFooter.innerHTML === '<br>' || !chatuiFooter.innerHTML.trim())) {
58+
chatuiFooter.innerHTML = '';
59+
}
60+
}
61+
62+
// Handles send button click event
63+
function sendIconClicked() {
64+
const messageContent = chatuiFooter.innerText;
65+
if (messageContent.trim()) {
66+
chatuiObj.addMessage({
67+
author: @Html.Raw(JsonConvert.SerializeObject(ViewBag.CurrentUser)),
68+
text: messageContent
69+
});
70+
chatuiFooter.innerHTML = '';
71+
toggleButtons();
72+
}
73+
}
74+
</script>
75+
76+
<script id="footerContent" type="text/x-jsrender">
77+
<div class="e-footer-wrapper">
78+
<div id="chatui-footer" class="content-editor" oninput="toggleButtons" contenteditable="true" placeholder="Click to speak or start typing..."></div>
79+
<div class="option-container">
80+
<button id="speechToText"></button>
81+
<button id="chatui-sendButton" class="e-assist-send e-icons" role="button"></button>
82+
</div>
83+
</div>
84+
</script>
85+
86+
<style>
87+
.integration-speechtotext {
88+
height: 400px;
89+
width: 450px;
90+
margin: 0 auto;
91+
}
92+
93+
.integration-speechtotext #chatui-sendButton {
94+
width: 40px;
95+
height: 40px;
96+
font-size: 15px;
97+
border: none;
98+
background: none;
99+
cursor: pointer;
100+
}
101+
102+
.integration-speechtotext #speechToText.visible,
103+
.integration-speechtotext #chatui-sendButton.visible {
104+
display: inline-block;
105+
}
106+
107+
.integration-speechtotext #speechToText,
108+
.integration-speechtotext #chatui-sendButton {
109+
display: none;
110+
}
111+
112+
@@media only screen and (max-width: 750px) {
113+
.integration-speechtotext {
114+
width: 100%;
115+
}
116+
}
117+
118+
.integration-speechtotext .e-footer-wrapper {
119+
display: flex;
120+
border: 1px solid #c1c1c1;
121+
margin: 5px 5px 0 5px;
122+
border-radius: 10px;
123+
padding: 5px;
124+
}
125+
126+
.integration-speechtotext .content-editor {
127+
width: 100%;
128+
overflow-y: auto;
129+
font-size: 14px;
130+
min-height: 20px;
131+
max-height: 150px;
132+
padding: 10px;
133+
}
134+
135+
.integration-speechtotext .content-editor[contentEditable='true']:empty:before {
136+
content: attr(placeholder);
137+
color: #6b7280;
138+
font-style: italic;
139+
}
140+
141+
.integration-speechtotext .option-container {
142+
align-self: flex-end;
143+
}
144+
</style>
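The send/microphone toggle in the snippet hinges on one decision: whether the footer currently holds non-whitespace text. Extracted as a pure function for clarity (this refactor is purely illustrative; the snippet keeps the check inline in `toggleButtons`):

```javascript
// Illustrative: the toggle decision from toggleButtons() as a pure function.
// Returns true when the send button should be visible (footer has real text),
// false when the microphone (Speech-to-Text) button should show instead.
function shouldShowSendButton(footerText) {
  return footerText.trim() !== '';
}
```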
Lines changed: 30 additions & 0 deletions
1+
using Syncfusion.EJ2.InteractiveChat;
2+
3+
public ChatUIUser CurrentUser { get; set; }
4+
public List<ChatUIMessage> ChatMessagesData { get; set; } = new List<ChatUIMessage>();
5+
public ChatUIUser CurrentUserModel { get; set; } = new ChatUIUser() { Id = "user1", User = "Albert" };
6+
public ChatUIUser MichaleUserModel { get; set; } = new ChatUIUser() { Id = "user2", User = "Michale Suyama" };
7+
8+
public ActionResult SpeechToText()
9+
{
10+
CurrentUser = CurrentUserModel;
11+
ChatMessagesData.Add(new ChatUIMessage()
12+
{
13+
Text = "Hi Michale, are we on track for the deadline?",
14+
Author = CurrentUserModel
15+
});
16+
ChatMessagesData.Add(new ChatUIMessage()
17+
{
18+
Text = "Yes, the design phase is complete.",
19+
Author = MichaleUserModel
20+
});
21+
ChatMessagesData.Add(new ChatUIMessage()
22+
{
23+
Text = "I’ll review it and send feedback by today.",
24+
Author = CurrentUserModel
25+
});
26+
ViewBag.ChatMessagesData = ChatMessagesData;
27+
ViewBag.CurrentUser = CurrentUser;
28+
ViewBag.MichaleUser = MichaleUserModel;
29+
return View();
30+
}
