avs-device-sdk/srcs/Captions/Implementation/test/LibwebvttParserAdapterTest.cpp

/*
* Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
/// @file LibwebvttParserAdapterTest.cpp
#include <gtest/gtest.h>
#include <chrono>
#include <AVSCommon/Utils/Logger/Logger.h>
#include <AVSCommon/Utils/MediaPlayer/MockMediaPlayer.h>
#include <Captions/CaptionData.h>
#include <Captions/CaptionFormat.h>
#include <Captions/CaptionFrame.h>
#include <Captions/LibwebvttParserAdapter.h>
#include <Captions/TextStyle.h>
#include "MockCaptionManager.h"
namespace alexaClientSDK {
namespace captions {
namespace test {
using namespace ::testing;
using namespace avsCommon;
using namespace avsCommon::avs;
using namespace avsCommon::utils;
using namespace avsCommon::utils::mediaPlayer;
using namespace avsCommon::utils::mediaPlayer::test;
using namespace std::chrono;
#ifdef ENABLE_CAPTIONS
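// These tests exercise the real libwebvtt-backed parser, so they are only compiled when captions support
// (ENABLE_CAPTIONS) is enabled in the build.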
/**
 * Test fixture for exercising LibwebvttParserAdapter through its CaptionParserInterface.
 */
class LibwebvttParserAdapterTest : public ::testing::Test {
public:
    void SetUp() override;

    void TearDown() override;

    /// The system under test.
    std::shared_ptr<CaptionParserInterface> m_libwebvttParser;

    /// Mock CaptionManager with which to exercise the CaptionParser.
    std::shared_ptr<MockCaptionManager> m_mockCaptionManager;
};
void LibwebvttParserAdapterTest::SetUp() {
    avsCommon::utils::logger::getConsoleLogger()->setLevel(logger::Level::DEBUG9);
    m_libwebvttParser = LibwebvttParserAdapter::getInstance();
    m_mockCaptionManager = std::make_shared<NiceMock<MockCaptionManager>>();
}

void LibwebvttParserAdapterTest::TearDown() {
    m_libwebvttParser->addListener(nullptr);
}
/**
 * Test that parse does not call onParsed when given an empty string, which lacks the required WebVTT header.
 */
TEST_F(LibwebvttParserAdapterTest, test_parseEmptyInputWithoutWebvttHeader) {
    m_libwebvttParser->addListener(m_mockCaptionManager);
    EXPECT_CALL(*(m_mockCaptionManager.get()), onParsed(_)).Times(0);

    const CaptionData inputData = CaptionData(CaptionFormat::WEBVTT, "");
    m_libwebvttParser->parse(0, inputData);
    m_libwebvttParser->releaseResourcesFor(0);
}
/**
 * Test that parse succeeds for a single, sane caption data and returns the same captionId back to the listener.
 */
TEST_F(LibwebvttParserAdapterTest, test_parseSingleCaptionFrame) {
    m_libwebvttParser->addListener(m_mockCaptionManager);

    std::vector<TextStyle> expectedStyles;
    Style s0 = Style();
    expectedStyles.emplace_back(TextStyle{0, s0});
    std::vector<CaptionLine> expectedCaptionLines;
    expectedCaptionLines.emplace_back(CaptionLine{"The time is 2:17 PM.", expectedStyles});
    CaptionFrame expectedCaptionFrame = CaptionFrame(123, milliseconds(1260), milliseconds(0), expectedCaptionLines);
    EXPECT_CALL(*(m_mockCaptionManager.get()), onParsed(expectedCaptionFrame)).Times(1);

    const std::string webvttContent =
        "WEBVTT\n"
        "\n"
        "1\n"
        "00:00.000 --> 00:01.260\n"
        "The time is 2:17 PM.";
    const CaptionData inputData = CaptionData(CaptionFormat::WEBVTT, webvttContent);

    m_libwebvttParser->parse(123, inputData);
    m_libwebvttParser->releaseResourcesFor(123);
}
/**
 * Test that parse succeeds for multiple, sane caption data and returns the appropriate captionIds back to the
 * listener, along with the correct caption frame.
 */
TEST_F(LibwebvttParserAdapterTest, test_parseMultipleCaptionFrames) {
    m_libwebvttParser->addListener(m_mockCaptionManager);

    // Expected frame #1
    std::vector<TextStyle> frame1_expectedStyles;
    frame1_expectedStyles.emplace_back(TextStyle{0, Style()});
    std::vector<CaptionLine> frame1_expectedCaptionLines;
    frame1_expectedCaptionLines.emplace_back(CaptionLine{"The time is 2:17 PM.", frame1_expectedStyles});
    CaptionFrame frame1_expectedCaptionFrame =
        CaptionFrame(101, milliseconds(1260), milliseconds(0), frame1_expectedCaptionLines);

    // Expected frame #2
    std::vector<TextStyle> frame2_expectedStyles;
    frame2_expectedStyles.emplace_back(TextStyle{0, Style()});
    std::vector<CaptionLine> frame2_expectedCaptionLines;
    frame2_expectedCaptionLines.emplace_back(CaptionLine{"Never drink liquid nitrogen.", frame2_expectedStyles});
    CaptionFrame frame2_expectedCaptionFrame =
        CaptionFrame(102, milliseconds(3000), milliseconds(1000), frame2_expectedCaptionLines);

    EXPECT_CALL(*(m_mockCaptionManager.get()), onParsed(frame1_expectedCaptionFrame)).Times(1);
    EXPECT_CALL(*(m_mockCaptionManager.get()), onParsed(frame2_expectedCaptionFrame)).Times(1);

    const std::string frame1_webvttContent =
        "WEBVTT\n"
        "\n"
        "1\n"
        "00:00.000 --> 00:01.260\n"
        "The time is 2:17 PM.";
    const CaptionData frame1_inputData = CaptionData(CaptionFormat::WEBVTT, frame1_webvttContent);

    const std::string frame2_webvttContent =
        "WEBVTT\n"
        "\n"
        "00:01.000 --> 00:04.000\n"
        "Never drink liquid nitrogen.";
    const CaptionData frame2_inputData = CaptionData(CaptionFormat::WEBVTT, frame2_webvttContent);

    m_libwebvttParser->parse(101, frame1_inputData);
    m_libwebvttParser->parse(102, frame2_inputData);
    m_libwebvttParser->releaseResourcesFor(101);
    m_libwebvttParser->releaseResourcesFor(102);
}
/**
 * Test that parse succeeds for a single, sane caption data and returns multiple caption frames, both with the same ID.
 */
TEST_F(LibwebvttParserAdapterTest, test_parseSingleCaptionDataToCaptionFrames) {
    m_libwebvttParser->addListener(m_mockCaptionManager);

    // Expected frame #1
    std::vector<CaptionLine> frame1_expectedCaptionLines;
    frame1_expectedCaptionLines.emplace_back(CaptionLine{"Never drink liquid nitrogen.", {TextStyle{0, Style()}}});
    CaptionFrame frame1_expectedCaptionFrame =
        CaptionFrame(101, milliseconds(3000), milliseconds(0), frame1_expectedCaptionLines);

    // Expected frame #2
    std::vector<CaptionLine> frame2_expectedCaptionLines;
    frame2_expectedCaptionLines.emplace_back(CaptionLine{"- It will perforate your stomach.", {TextStyle{0, Style()}}});
    frame2_expectedCaptionLines.emplace_back(CaptionLine{"- You could die.", {TextStyle{0, Style()}}});
    CaptionFrame frame2_expectedCaptionFrame =
        CaptionFrame(101, milliseconds(4000), milliseconds(0), frame2_expectedCaptionLines);

    EXPECT_CALL(*(m_mockCaptionManager.get()), onParsed(frame1_expectedCaptionFrame)).Times(1);
    EXPECT_CALL(*(m_mockCaptionManager.get()), onParsed(frame2_expectedCaptionFrame)).Times(1);

    const std::string webvttContent =
        "WEBVTT\n"
        "\n"
        "00:00.000 --> 00:03.000\n"
        "Never drink liquid nitrogen.\n"
        "\n"
        "00:03.000 --> 00:07.000\n"
        "- It will perforate your stomach.\n"
        "- You could die.";
    const CaptionData inputData = CaptionData(CaptionFormat::WEBVTT, webvttContent);

    m_libwebvttParser->parse(101, inputData);
    m_libwebvttParser->releaseResourcesFor(101);
}
/**
 * Test that parse honors a time gap between two caption frames.
 */
TEST_F(LibwebvttParserAdapterTest, test_parseDelayBetweenCaptionFrames) {
    m_libwebvttParser->addListener(m_mockCaptionManager);

    // Expected frame #1
    std::vector<CaptionLine> frame1_expectedCaptionLines;
    frame1_expectedCaptionLines.emplace_back(CaptionLine{"Never drink liquid nitrogen.", {TextStyle{0, Style()}}});
    CaptionFrame frame1_expectedCaptionFrame =
        CaptionFrame(101, milliseconds(3000), milliseconds(1000), frame1_expectedCaptionLines);

    // Expected frame #2
    std::vector<CaptionLine> frame2_expectedCaptionLines;
    frame2_expectedCaptionLines.emplace_back(CaptionLine{"- It will perforate your stomach.", {TextStyle{0, Style()}}});
    frame2_expectedCaptionLines.emplace_back(CaptionLine{"- You could die.", {TextStyle{0, Style()}}});
    CaptionFrame frame2_expectedCaptionFrame =
        CaptionFrame(101, milliseconds(4000), milliseconds(1000), frame2_expectedCaptionLines);

    EXPECT_CALL(*(m_mockCaptionManager.get()), onParsed(frame1_expectedCaptionFrame)).Times(1);
    EXPECT_CALL(*(m_mockCaptionManager.get()), onParsed(frame2_expectedCaptionFrame)).Times(1);

    const std::string webvttContent =
        "WEBVTT\n"
        "\n"
        "00:01.000 --> 00:04.000\n"
        "Never drink liquid nitrogen.\n"
        "\n"
        "00:05.000 --> 00:09.000\n"
        "- It will perforate your stomach.\n"
        "- You could die.";
    const CaptionData inputData = CaptionData(CaptionFormat::WEBVTT, webvttContent);

    m_libwebvttParser->parse(101, inputData);
    m_libwebvttParser->releaseResourcesFor(101);
}
/**
 * Test that parse converts the bold tag to bold style.
 */
TEST_F(LibwebvttParserAdapterTest, test_parseBoldTagToStyle) {
    m_libwebvttParser->addListener(m_mockCaptionManager);

    // Each TextStyle pairs a character offset in the tag-stripped text with the active style: bold turns on at
    // index 4 (the start of "time") and off again at index 8 (just past "time").
    std::vector<TextStyle> expectedStyles;
    Style s0 = Style();
    expectedStyles.emplace_back(TextStyle{0, s0});
    Style s1 = Style();
    s1.m_bold = true;
    expectedStyles.emplace_back(TextStyle{4, s1});
    Style s2 = Style();
    s2.m_bold = false;
    expectedStyles.emplace_back(TextStyle{8, s2});

    std::vector<CaptionLine> expectedCaptionLines;
    expectedCaptionLines.emplace_back(CaptionLine{"The time is 2:17 PM.", expectedStyles});
    CaptionFrame expectedCaptionFrame = CaptionFrame(123, milliseconds(1260), milliseconds(0), expectedCaptionLines);
    EXPECT_CALL(*(m_mockCaptionManager.get()), onParsed(expectedCaptionFrame)).Times(1);

    const std::string webvttContent =
        "WEBVTT\n"
        "\n"
        "1\n"
        "00:00.000 --> 00:01.260\n"
        "The <b>time</b> is 2:17 PM.";
    const CaptionData inputData = CaptionData(CaptionFormat::WEBVTT, webvttContent);

    m_libwebvttParser->parse(123, inputData);
    m_libwebvttParser->releaseResourcesFor(123);
}
/**
 * Test that parse converts the italic tag to italic style.
 */
TEST_F(LibwebvttParserAdapterTest, test_parseItalicTagToStyle) {
    m_libwebvttParser->addListener(m_mockCaptionManager);

    std::vector<TextStyle> expectedStyles;
    Style s0 = Style();
    expectedStyles.emplace_back(TextStyle{0, s0});
    Style s1 = Style();
    s1.m_italic = true;
    expectedStyles.emplace_back(TextStyle{4, s1});
    Style s2 = Style();
    s2.m_italic = false;
    expectedStyles.emplace_back(TextStyle{8, s2});

    std::vector<CaptionLine> expectedCaptionLines;
    expectedCaptionLines.emplace_back(CaptionLine{"The time is 2:17 PM.", expectedStyles});
    CaptionFrame expectedCaptionFrame = CaptionFrame(123, milliseconds(1260), milliseconds(0), expectedCaptionLines);
    EXPECT_CALL(*(m_mockCaptionManager.get()), onParsed(expectedCaptionFrame)).Times(1);

    const std::string webvttContent =
        "WEBVTT\n"
        "\n"
        "1\n"
        "00:00.000 --> 00:01.260\n"
        "The <i>time</i> is 2:17 PM.";
    const CaptionData inputData = CaptionData(CaptionFormat::WEBVTT, webvttContent);

    m_libwebvttParser->parse(123, inputData);
    m_libwebvttParser->releaseResourcesFor(123);
}
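/**
 * A minimal sketch of one possible additional case, left disabled by default: WebVTT content with a valid header
 * but no cues. The expectation that onParsed is never called for cue-less input is an assumption about the
 * adapter's behavior and has not been verified against libwebvtt.
 */
TEST_F(LibwebvttParserAdapterTest, DISABLED_test_parseHeaderOnlyProducesNoFrames) {
    m_libwebvttParser->addListener(m_mockCaptionManager);

    // Assumption: a header-only document produces no CaptionFrame callbacks.
    EXPECT_CALL(*(m_mockCaptionManager.get()), onParsed(_)).Times(0);

    const CaptionData inputData = CaptionData(CaptionFormat::WEBVTT, "WEBVTT\n");
    m_libwebvttParser->parse(201, inputData);
    m_libwebvttParser->releaseResourcesFor(201);
}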
#endif  // ENABLE_CAPTIONS
} // namespace test
} // namespace captions
} // namespace alexaClientSDK