From d056c92910d4939dfd7ed4dce0697b57f72051ca Mon Sep 17 00:00:00 2001 From: Jamie Meyers Date: Tue, 13 Jun 2017 17:29:27 -0700 Subject: [PATCH] Update README.md to fix broken links. In addition to fixing broken links, duplicated words have been removed and trailing whitespace stripped. --- README.md | 260 +++++++++++++++++++++++++++--------------------------- 1 file changed, 130 insertions(+), 130 deletions(-) diff --git a/README.md b/README.md index 3d242fd2..2503ab92 100644 --- a/README.md +++ b/README.md @@ -24,15 +24,15 @@ This architecture diagram illustrates the data flows between components that com ![SDK Architecture Diagram](https://images-na.ssl-images-amazon.com/images/G/01/mobile-apps/dex/alexa/alexa-voice-service/docs/avs-cpp-sdk-architecture-20170601.png) -**Audio Signal Processor (ASP)** - Applies signal processing algorithms to both input and output audio channels. The applied algorithms are designed to produce clean audio data and include, but are not limited to: acoustic echo cancellation (AEC), beam forming (fixed or adaptive), voice activity detection (VAD), and dynamic range compression (DRC). If a multi-microphone array is present, the ASP constructs and outputs a single audio stream for the array. +**Audio Signal Processor (ASP)** - Applies signal processing algorithms to both input and output audio channels. The applied algorithms are designed to produce clean audio data and include, but are not limited to: acoustic echo cancellation (AEC), beam forming (fixed or adaptive), voice activity detection (VAD), and dynamic range compression (DRC). If a multi-microphone array is present, the ASP constructs and outputs a single audio stream for the array. **Shared Data Stream (SDS)** - A single producer, multi-consumer buffer that allows for the transport of any type of data between a single writer and one or more readers. 
SDS performs two key tasks: 1) it passes audio data between the audio front end (or Audio Signal Processor), the wake word engine, and the Alexa Communications Library (ACL) before sending to AVS; 2) it passes data attachments sent by AVS to specific capability agents via the ACL.

-SDS is implemented atop a ring buffer on a product-specific memory segment (or user-specified), which allows it to be used for in-process or interprocess communication. Keep in mind, the writer and reader(s) may be in different threads or processes.
+SDS is implemented atop a ring buffer on a product-specific (or user-specified) memory segment, which allows it to be used for in-process or interprocess communication. Keep in mind that the writer and reader(s) may be in different threads or processes.

**Wake Word Engine (WWE)** - Spots wake words in an input stream. It comprises two binary interfaces. The first handles wake word spotting (or detection), and the second handles specific wake word models (in this case "Alexa"). Depending on your implementation, the WWE may run on the system on a chip (SOC) or a dedicated chip, like a digital signal processor (DSP).

-**Audio Input Processor (AIP)** - Handles audio input that is sent to AVS via the ACL. These include on-device microphones, remote microphones, an other audio input sources.
+**Audio Input Processor (AIP)** - Handles audio input that is sent to AVS via the ACL. These sources include on-device microphones, remote microphones, and other audio input sources.

The AIP also includes the logic to switch between different audio input sources. Only one audio input source can be sent to AVS at a given time.

@@ -41,7 +41,7 @@ The AIP also includes the logic to switch between different audio input sources.

* Establishes and maintains long-lived persistent connections with AVS.
ACL adheres to the messaging specification detailed in [Managing an HTTP/2 Connection with AVS](https://developer.amazon.com/public/solutions/alexa/alexa-voice-service/docs/managing-an-http-2-connection).
* Provides message sending and receiving capabilities, which include support for JSON-formatted text and binary audio content. For additional information, see [Structuring an HTTP/2 Request to AVS](https://developer.amazon.com/public/solutions/alexa/alexa-voice-service/docs/avs-http2-requests).

-**Alexa Directive Sequencer Library (ADSL)**: Manages the order and sequence of directives from AVS, as detailed in the [AVS Interaction Model](https://developer.amazon.com/public/solutions/alexa/alexa-voice-service/reference/interaction-model#channels). This component manages the lifecycle of each directive, and informs the Directive Handler (which may or may not be a Capability Agent) to handle the message.
+**Alexa Directive Sequencer Library (ADSL)**: Manages the order and sequence of directives from AVS, as detailed in the [AVS Interaction Model](https://developer.amazon.com/public/solutions/alexa/alexa-voice-service/reference/interaction-model#channels). This component manages the lifecycle of each directive and informs the Directive Handler (which may or may not be a Capability Agent) to handle the message.

See [**Appendix B**](#appendix-b-directive-lifecycle-diagram) for a diagram of the directive lifecycle.
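The single-writer, multiple-reader behavior described for SDS can be sketched as a ring buffer with one write position and an independent cursor per reader. The following is an illustrative Python model only — the class and method names are invented here, and the SDK's actual C++ `SharedDataStream` API differs:

```python
class MiniSDS:
    """Toy single-writer, multi-reader ring buffer (illustration, not the SDK API)."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.write_pos = 0   # total number of items ever written
        self.readers = {}    # reader id -> absolute read position

    def write(self, item):
        # The single writer overwrites the oldest slot once the buffer wraps.
        self.buf[self.write_pos % self.capacity] = item
        self.write_pos += 1

    def create_reader(self, reader_id):
        # A new reader starts at the current write position ("now").
        self.readers[reader_id] = self.write_pos

    def read(self, reader_id):
        pos = self.readers[reader_id]
        if pos == self.write_pos:
            return None  # no new data for this reader yet
        if self.write_pos - pos > self.capacity:
            # The writer lapped this reader; its oldest data was overwritten.
            raise RuntimeError("reader overrun: writer lapped this reader")
        item = self.buf[pos % self.capacity]
        self.readers[reader_id] = pos + 1
        return item
```

In the architecture above, the writer role corresponds to the audio front end and the readers to the wake word engine and ACL. Each reader advances independently, and a reader that falls more than one buffer's worth behind the writer has lost data, which is why the sketch raises an overrun error.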
@@ -72,57 +72,57 @@ Focus management is not specific to Capability Agents or Directive Handlers, and

* [Doxygen 1.8.13](http://www.stack.nl/~dimitri/doxygen/download.html) or later (required to build API documentation)

Building the reference implementation of the `MediaPlayerInterface` (the class `MediaPlayer`) is optional, but requires:
-* [GStreamer 1.8](https://gstreamer.freedesktop.org/documentation/installing/index.html) or later and the following GStreamer plug-ins:
+* [GStreamer 1.8](https://gstreamer.freedesktop.org/documentation/installing/index.html) or later and the following GStreamer plug-ins:
  * [GStreamer Base Plugins 1.8](https://gstreamer.freedesktop.org/releases/gst-plugins-base/1.8.0.html) or later.
  * [GStreamer Good Plugins 1.8](https://gstreamer.freedesktop.org/releases/gst-plugins-good/1.8.0.html) or later.
  * [GStreamer Libav Plugin 1.8](https://gstreamer.freedesktop.org/releases/gst-libav/1.8.0.html) or later **OR** [GStreamer Ugly Plugins 1.8](https://gstreamer.freedesktop.org/releases/gst-plugins-ugly/1.8.0.html) or later, for decoding MP3 data.
-
-**NOTE**: The plugins may depend on libraries which need to be installed as well for the GStreamer based `MediaPlayer` to work correctly.
+
+**NOTE**: The plugins may depend on libraries that must also be installed for the GStreamer-based `MediaPlayer` to work correctly.

## Prerequisites

Before you create your build, you'll need to install some software that is required to run `AuthServer`. `AuthServer` is a minimal authorization server built in Python using Flask. It provides an easy way to obtain your first refresh token, which will be used for integration tests and for obtaining the access tokens that are required for all interactions with AVS.

-**IMPORTANT NOTE**: `AuthServer` is for testing purposed only.
A commercial product is expected to obtain Login with Amazon (LWA) credentials using the instructions provided on the Amazon Developer Portal for **Remote Authorization** and **Local Authorization**. For additional information, see [AVS Authorization](https://developer.amazon.com/public/solutions/alexa/alexa-voice-service/content/avs-api-overview#authorization).
+**IMPORTANT NOTE**: `AuthServer` is for testing purposes only. A commercial product is expected to obtain Login with Amazon (LWA) credentials using the instructions provided on the Amazon Developer Portal for **Remote Authorization** and **Local Authorization**. For additional information, see [AVS Authorization](https://developer.amazon.com/public/solutions/alexa/alexa-voice-service/content/avs-api-overview#authorization).

-### Step 1: Install `pip`
+### Step 1: Install `pip`

-If `pip` isn't installed on your system, follow the detailed install instructions [here](https://packaging.python.org/installing/#install-pip-setuptools-and-wheel).
+If `pip` isn't installed on your system, follow the detailed install instructions [here](https://packaging.python.org/installing/#install-pip-setuptools-and-wheel).

### Step 2: Install `flask` and `requests`

-For Windows run this command:
+For Windows, run this command:

```
-pip install flask requests
-```
+pip install flask requests
+```

-For Unix/Mac run this command:
+For Unix/Mac, run this command:

```
pip install --user flask requests
-```
+```

-### Step 3: Obtain Your Device Type ID, Cliend ID, and Client Secret
+### Step 3: Obtain Your Device Type ID, Client ID, and Client Secret

If you haven't already, follow these instructions to [register a product and create a security profile](https://github.com/alexa/alexa-avs-sample-app/wiki/Create-Security-Profile).
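If you want to confirm that the Step 2 dependencies installed correctly before going further, a quick check can be run from a Python prompt. This snippet is illustrative only and not part of the official setup:

```python
# Quick check that the AuthServer dependencies from Step 2 are importable.
import importlib.util

def missing_modules(names):
    """Return the subset of module names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

missing = missing_modules(["flask", "requests"])
if missing:
    print("missing:", ", ".join(missing), "- rerun the pip install command above")
else:
    print("flask and requests are available")
```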
-Make sure you note the following, you'll need these later when you configure `AuthServer`:
+Make sure you note the following; you'll need these later when you configure `AuthServer`:

-* Device Type ID
-* Client ID
* Client Secret
-**IMPORTANT NOTE**: Make sure that you've set your **Allowed Origins** and **Allowed Return URLs** in the **Web Settings Tab**:
-* Allowed Origins: http://localhost:3000
-* Allowed Return URLs: http://localhost:3000/authresponse
+* Device Type ID
+* Client ID
+**IMPORTANT NOTE**: Make sure that you've set your **Allowed Origins** and **Allowed Return URLs** in the **Web Settings Tab**:
+* Allowed Origins: http://localhost:3000
+* Allowed Return URLs: http://localhost:3000/authresponse

## Create an Out-of-Source Build

The following instructions assume that all requirements and dependencies are met and that you have cloned the repository (or saved the tarball locally).

-### CMake Build Types and Options
+### CMake Build Types and Options

The following build types are supported:

@@ -131,19 +131,19 @@ The following build types are supported:

* `MINSIZEREL` - Compiles with `RELEASE` flags and optimizations (`-Os`) for a smaller build size.

To specify a build type, use this command in place of step 4 below (see [Build for Generic Linux](#generic-linux) or [Build for macOS](#build-for-macos)):
-`cmake -DCMAKE_BUILD_TYPE=`
+`cmake -DCMAKE_BUILD_TYPE=<build-type>`

-### Build with a Wake Word Detector
+### Build with a Wake Word Detector

-The Alexa Client SDK supports wake word detectors from [Sensory](https://github.com/Sensory/alexa-rpi) and [KITT.ai](https://github.com/Kitt-AI/snowboy/). The following options are required to build with a wake word detector, please replace `` with `SENSORY` for Sensory, and `KITTAI` for KITT.ai:
+The Alexa Client SDK supports wake word detectors from [Sensory](https://github.com/Sensory/alexa-rpi) and [KITT.ai](https://github.com/Kitt-AI/snowboy/).
The following options are required to build with a wake word detector; replace `<detector>` with `SENSORY` for Sensory, or `KITTAI` for KITT.ai:

-* `-D_KEY_WORD_DETECTOR=` - Specifies if the wake word detector is enabled or disabled during build.
-* `-D_KEY_WORD_DETECTOR_LIB_PATH=` - The path to the wake word detector library.
-* `-D_KEY_WORD_DETECTOR_INCLUDE_DIR=` - The path to the wake word detector include directory.
+* `-D<detector>_KEY_WORD_DETECTOR=<ON/OFF>` - Specifies whether the wake word detector is enabled or disabled during the build.
+* `-D<detector>_KEY_WORD_DETECTOR_LIB_PATH=<path-to-lib>` - The path to the wake word detector library.
+* `-D<detector>_KEY_WORD_DETECTOR_INCLUDE_DIR=<path-to-include-dir>` - The path to the wake word detector include directory.

-**Note**: To list all available CMake options, use the following command: `-LH`.
+**Note**: To list all available CMake options, run `cmake` with the `-LH` option.

-#### Sensory
+#### Sensory

If using the Sensory wake word detector, version [5.0.0-beta.10.2](https://github.com/Sensory/alexa-rpi) or later is required.

@@ -155,23 +155,23 @@ cmake -DSENSORY_KEY_WORD_DETECTOR=ON -DSENSORY_KEY_WORD_DETECTO

Note that you may need to license the Sensory library for use prior to running cmake and building it into the SDK. A script to license the Sensory library can be found on the Sensory [GitHub](https://github.com/Sensory/alexa-rpi) page under `bin/license.sh`.

-#### KITT.ai
+#### KITT.ai

-A matrix calculation library, known as BLAS, is required to use KITT.ai.
The following are sample commands to install this library:
-* Generic Linux - `apt-get install libatlas-base-dev`
-* macOS - `brew install homebrew/science/openblas`
+* Generic Linux - `apt-get install libatlas-base-dev`
+* macOS - `brew install homebrew/science/openblas`

This is an example `cmake` command to build with KITT.ai:

```
cmake -DKITTAI_KEY_WORD_DETECTOR=ON -DKITTAI_KEY_WORD_DETECTOR_LIB_PATH=.../snowboy-1.2.0/lib/libsnowboy-detect.a -DKITTAI_KEY_WORD_DETECTOR_INCLUDE_DIR=.../snowboy-1.2.0/include
-```
+```

### Build with an implementation of `MediaPlayer`

-`MediaPlayer` (the reference implementation of the `MediaPlayerInterface`) is based upon [GStreamer](https://gstreamer.freedesktop.org/) and is not built by default. To build 'MediaPlayer' the `-DGSTREAMER_MEDIA_PLAYER=ON` option must be specified to CMake.
+`MediaPlayer` (the reference implementation of the `MediaPlayerInterface`) is based upon [GStreamer](https://gstreamer.freedesktop.org/) and is not built by default. To build `MediaPlayer`, the `-DGSTREAMER_MEDIA_PLAYER=ON` option must be specified to CMake.

-If GStreamer was [installed from source](https://gstreamer.freedesktop.org/documentation/frequently-asked-questions/getting.html) the prefix path provided when building must be specified to CMake with the `DCMAKE_PREFIX_PATH` option. This is an example CMake command:
+If GStreamer was [installed from source](https://gstreamer.freedesktop.org/documentation/frequently-asked-questions/getting.html), the prefix path provided when building must be specified to CMake with the `-DCMAKE_PREFIX_PATH` option. This is an example CMake command:

```
cmake -DGSTREAMER_MEDIA_PLAYER=ON -DCMAKE_PREFIX_PATH=<path-to-gstreamer-prefix>

@@ -184,8 +184,8 @@ To create an out-of-source build for Linux:

1. Clone the repository (or download and extract the tarball).
2. Create a build directory out-of-source. **Important**: The directory cannot be a subdirectory of the source folder.
3. `cd` into your build directory.
-4. From your build directory, run `cmake` on the source directory to generate make files for the SDK: `cmake `.
-5.
After you've successfully run `cmake`, you should see the following message: `-- Please fill /Integration/AlexaClientSDKConfig.json before you execute integration tests.`. Open `Integration/AlexaClientSDKConfig.json` with your favorite text editor and fill in your product information (which you got from the developer portal when registering a product and creating a security profile). It should look like this: +4. From your build directory, run `cmake` on the source directory to generate make files for the SDK: `cmake `. +5. After you've successfully run `cmake`, you should see the following message: `-- Please fill /Integration/AlexaClientSDKConfig.json before you execute integration tests.`. Open `Integration/AlexaClientSDKConfig.json` with your favorite text editor and fill in your product information (which you got from the developer portal when registering a product and creating a security profile). It should look like this: ```json { "authDelegate":{ @@ -195,9 +195,9 @@ To create an out-of-source build for Linux: "deviceSerialNumber":"" } } - ``` - **NOTE**: The `deviceSerialNumber` is a unique identifier that you create. It is **not** provided by Amazon. -6. From the build directory, run `make` to build the SDK. + ``` + **NOTE**: The `deviceSerialNumber` is a unique identifier that you create. It is **not** provided by Amazon. +6. From the build directory, run `make` to build the SDK. ### Build for macOS @@ -218,9 +218,9 @@ To create an out-of-source build for macOS: 1. Clone the repository (or download and extract the tarball). 2. Create a build directory out-of-source. **Important**: The directory cannot be a subdirectory of the source folder. -3. `cd` into your build directory. -4. From your build directory, run `cmake` on the source directory to generate make files for the SDK: `cmake `. -5. 
After you've successfully run `cmake`, you should see the following message: `-- Please fill /Integration/AlexaClientSDKConfig.json before you execute integration tests.`. Open `Integration/AlexaClientSDKConfig.json` with your favorite text editor and fill in your product information (which you got from the developer portal when registering a product and creating a security profile). It should look like this: +3. `cd` into your build directory. +4. From your build directory, run `cmake` on the source directory to generate make files for the SDK: `cmake `. +5. After you've successfully run `cmake`, you should see the following message: `-- Please fill /Integration/AlexaClientSDKConfig.json before you execute integration tests.`. Open `Integration/AlexaClientSDKConfig.json` with your favorite text editor and fill in your product information (which you got from the developer portal when registering a product and creating a security profile). It should look like this: ```json { "authDelegate":{ @@ -230,46 +230,46 @@ To create an out-of-source build for macOS: "deviceSerialNumber":"" } } - ``` - **NOTE**: The `deviceSerialNumber` is a unique identifier that you create. It is **not** provided by Amazon. -6. From the build directory, run `make` to build the SDK. + ``` + **NOTE**: The `deviceSerialNumber` is a unique identifier that you create. It is **not** provided by Amazon. +6. From the build directory, run `make` to build the SDK. -## Run `AuthServer` +## Run `AuthServer` After you've created your out-of-source build, the next step is to run `AuthServer` to retrieve a valid refresh token from LWA. -* Run this command to start `AuthServer`: +* Run this command to start `AuthServer`: ``` - python AuthServer/AuthServer.py - ``` - You should see a message that indicates the server is running. -* Open your favorite browser and navigate to: `http://localhost:3000` -* Follow the on-screen instructions. 
-* After you've entered your credentials, the server should terminate itself, and `Integration/AlexaClientSDKConfig.json` will be populated with your refresh token.
-* Before you proceed, it's important that you make sure the refresh token is in `Integration/AlexaClientSDKConfig.json`.
+  python AuthServer/AuthServer.py
+  ```
+  You should see a message that indicates the server is running.
+* Open your favorite browser and navigate to: `http://localhost:3000`
+* Follow the on-screen instructions.
+* After you've entered your credentials, the server should terminate itself, and `Integration/AlexaClientSDKConfig.json` will be populated with your refresh token.
+* Before you proceed, make sure that the refresh token is present in `Integration/AlexaClientSDKConfig.json`.

## Run Unit Tests

Unit tests for the Alexa Client SDK use the [Google Test](https://github.com/google/googletest) framework. Ensure that [Google Test](https://github.com/google/googletest) is installed, then run the following command: `make all test`

Ensure that all tests pass before you begin integration testing.
-### Run Unit Tests with Sensory Enabled +### Run Unit Tests with Sensory Enabled -In order to run unit tests for the Sensory wake word detector, the following files must be downloaded from [GitHub](https://github.com/Sensory/alexa-rpi) and placed in `KWD/inputs/SensoryModels` for the integration tests to run properly: +In order to run unit tests for the Sensory wake word detector, the following files must be downloaded from [GitHub](https://github.com/Sensory/alexa-rpi) and placed in `KWD/inputs/SensoryModels` for the integration tests to run properly: -* [`spot-alexa-rpi-31000.snsr`](https://github.com/Sensory/alexa-rpi/blob/master/models/spot-alexa-rpi-31000.snsr) +* [`spot-alexa-rpi-31000.snsr`](https://github.com/Sensory/alexa-rpi/blob/master/models/spot-alexa-rpi-31000.snsr) -### Run Unit Tests with KITT.ai Enabled +### Run Unit Tests with KITT.ai Enabled In order to run unit tests for the KITT.ai wake word detector, the following files must be downloaded from [GitHub](https://github.com/Kitt-AI/snowboy/tree/master/resources) and placed in `KWD/inputs/KittAiModels`: -* [`common.res`](https://github.com/Kitt-AI/snowboy/tree/master/resources) -* [`alexa.umdl`](https://github.com/Kitt-AI/snowboy/tree/master/resources/alexa/alexa-avs-sample-app) - It's important that you download the `alexa.umdl` in `resources/alexa/alexa-avs-sample-app` for the KITT.ai unit tests to run properly. +* [`common.res`](https://github.com/Kitt-AI/snowboy/tree/master/resources) +* [`alexa.umdl`](https://github.com/Kitt-AI/snowboy/tree/master/resources/alexa/alexa-avs-sample-app) - It's important that you download the `alexa.umdl` in `resources/alexa/alexa-avs-sample-app` for the KITT.ai unit tests to run properly. -## Run Integration Tests +## Run Integration Tests -Integration tests ensure that your build can make a request and receive a response from AVS. 
**All requests to AVS require auth credentials.** +Integration tests ensure that your build can make a request and receive a response from AVS. **All requests to AVS require auth credentials.** **Important**: Integration tests reference an `AlexaClientSDKConfig.json` file, which you must create. See the `Create the AlexaClientSDKConfig.json file` section (above), if you have not already done this. @@ -279,58 +279,58 @@ To exercise the integration tests run this command: ### Run Integration Tests with Sensory Enabled -If the project was built with the Sensory wake word detector, the following files must be downloaded from [GitHub](https://github.com/Sensory/alexa-rpi) and placed in `Integration/inputs/SensoryModels` for the integration tests to run properly: +If the project was built with the Sensory wake word detector, the following files must be downloaded from [GitHub](https://github.com/Sensory/alexa-rpi) and placed in `Integration/inputs/SensoryModels` for the integration tests to run properly: -* [`spot-alexa-rpi-31000.snsr`](https://github.com/Sensory/alexa-rpi/blob/master/models/spot-alexa-rpi-31000.snsr) +* [`spot-alexa-rpi-31000.snsr`](https://github.com/Sensory/alexa-rpi/blob/master/models/spot-alexa-rpi-31000.snsr) ### Run Integration Tests with KITT.ai Enabled If the project was built with the KITT.ai wake word detector, the following files must be downloaded from [GitHub](https://github.com/Kitt-AI/snowboy/tree/master/resources) and placed in `Integration/inputs/KittAiModels` for the integration tests to run properly: -* [`common.res`](https://github.com/Kitt-AI/snowboy/tree/master/resources) -* [`alexa.umdl`](https://github.com/Kitt-AI/snowboy/tree/master/resources/alexa/alexa-avs-sample-app) - It's important that you download the `alexa.umdl` in `resources/alexa/alexa-avs-sample-app` for the KITT.ai integration tests to run properly. 
+* [`common.res`](https://github.com/Kitt-AI/snowboy/tree/master/resources) +* [`alexa.umdl`](https://github.com/Kitt-AI/snowboy/tree/master/resources/alexa/alexa-avs-sample-app) - It's important that you download the `alexa.umdl` in `resources/alexa/alexa-avs-sample-app` for the KITT.ai integration tests to run properly. -## Alexa Client SDK API Documentation +## Alexa Client SDK API Documentation -To build the Alexa Client SDK API documentation, run this command from your build directory: `make doc`. +To build the Alexa Client SDK API documentation, run this command from your build directory: `make doc`. ## Resources and Guides * [Step-by-step instructions to optimize libcurl for size in `*nix` systems](https://github.com/alexa/alexa-client-sdk/wiki/optimize-libcurl). * [Step-by-step instructions to build libcurl with mbed TLS and nghttp2 for `*nix` systems](https://github.com/alexa/alexa-client-sdk/wiki/build-libcurl-with-mbed-TLS-and-nghttp2). -## Appendix A: Memory Profile +## Appendix A: Memory Profile -This appendix provides the memory profiles for various modules of the Alexa Client SDK. The numbers were observed running integration tests on a machine running Ubuntu 16.04.2 LTS. +This appendix provides the memory profiles for various modules of the Alexa Client SDK. The numbers were observed running integration tests on a machine running Ubuntu 16.04.2 LTS. 
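The per-module figures in the table that follows can be cross-checked against its Total row. This snippet simply re-adds the published numbers (in KB) as an arithmetic sanity check; it is not part of the SDK:

```python
# Sizes in KB copied from the Appendix A table:
# (source code, RELEASE library, MINSIZEREL library)
modules = {
    "ACL":               (356, 250, 239),
    "ADSL":              (224, 175, 159),
    "AFML":              (80, 133, 126),
    "ContextManager":    (84, 122, 116),
    "AIP":               (184, 340, 348),
    "SpeechSynthesizer": (120, 311, 321),
    "AVSCommon":         (772, 252, 228),
    "AVSUtils":          (332, 167, 133),
}

# Sum each column across all modules; should reproduce the Total row.
totals = [sum(col) for col in zip(*modules.values())]
print(totals)  # [2152, 1750, 1670] — matches the Total row
```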
-| Module | Source Code Size (Bytes) | Library Size RELEASE Build (libxxx.so) (Bytes) | Library Size MINSIZEREL Build (libxxx.so) (Bytes) |
-|--------|--------------------------|------------------------------------------------|---------------------------------------------------|
-| ACL | 356 KB | 250 KB | 239 KB |
-| ADSL | 224 KB | 175 KB | 159 KB |
-| AFML | 80 KB | 133 KB | 126 KB |
-| ContextManager | 84 KB | 122 KB | 116 KB |
-| AIP | 184 KB | 340 KB | 348 KB |
-| SpeechSynthesizer | 120 KB | 311 KB | 321 KB |
-| AVSCommon | 772 KB | 252 KB | 228 KB |
-| AVSUtils | 332 KB | 167 KB | 133 KB |
-| Total | 2152 KB | 1750 KB | 1670 KB |
+| Module | Source Code Size (KB) | Library Size, RELEASE Build (libxxx.so) (KB) | Library Size, MINSIZEREL Build (libxxx.so) (KB) |
+|--------|-----------------------|----------------------------------------------|-------------------------------------------------|
+| ACL | 356 KB | 250 KB | 239 KB |
+| ADSL | 224 KB | 175 KB | 159 KB |
+| AFML | 80 KB | 133 KB | 126 KB |
+| ContextManager | 84 KB | 122 KB | 116 KB |
+| AIP | 184 KB | 340 KB | 348 KB |
+| SpeechSynthesizer | 120 KB | 311 KB | 321 KB |
+| AVSCommon | 772 KB | 252 KB | 228 KB |
+| AVSUtils | 332 KB | 167 KB | 133 KB |
+| Total | 2152 KB | 1750 KB | 1670 KB |

**Runtime Memory**

Unique set size (USS) and proportional set size (PSS) were measured with smem while integration tests were run.
-| Runtime Memory | Average USS | Max USS (Bytes) | Average PSS | Max PSS (Bytes) |
-|----------------|-------------|-----------------|-------------|-----------------|
-| ACL | 8 MB | 15 MB | 8 MB | 16 MB |
-| ADSL + ACL | 8 MB MB | 20 MB | 9 MB | 21 MB |
-| AIP | 9 MB | 12 MB | 9 MB | 13 MB |
-| ** SpeechSynthesizer | 11 MB | 18 MB | 12 MB | 20 MB |
+| Runtime Memory | Average USS (MB) | Max USS (MB) | Average PSS (MB) | Max PSS (MB) |
+|----------------|------------------|--------------|------------------|--------------|
+| ACL | 8 MB | 15 MB | 8 MB | 16 MB |
+| ADSL + ACL | 8 MB | 20 MB | 9 MB | 21 MB |
+| AIP | 9 MB | 12 MB | 9 MB | 13 MB |
+| SpeechSynthesizer** | 11 MB | 18 MB | 12 MB | 20 MB |

-** This test was run using the GStreamer-based MediaPlayer for audio playback.
+\*\* *This test was run using the GStreamer-based MediaPlayer for audio playback.*

**Definitions**

-* **USS**: The amount of memory that is private to the process and not shared with any other processes.
-* **PSS**: The amount of memory shared with other processes; divided by the number of processes sharing each page.
+* **USS**: The amount of memory that is private to the process and not shared with any other processes.
+* **PSS**: The process's private memory plus its proportional share of shared memory (each shared page is divided by the number of processes sharing it).

## Appendix B: Directive Lifecycle Diagram

@@ -338,7 +338,7 @@

## Appendix C: Runtime Configuration of path to CA Certificates

-By default libcurl is built with paths to a CA bundle and a directory containing CA certificates. You can direct the Alexa Client SDK to configure libcurl to use an additional path to directories containing CA certificates via the [CURLOPT_CAPATH](https://curl.haxx.se/libcurl/c/CURLOPT_CAPATH.html) setting. This is done by adding a `"libcurlUtils/CURLOPT_CAPATH"` entry to the `AlexaClientSDKConfig.json` file.
Here is an example:
+By default, libcurl is built with paths to a CA bundle and a directory containing CA certificates. You can direct the Alexa Client SDK to configure libcurl to use an additional path to directories containing CA certificates via the [CURLOPT_CAPATH](https://curl.haxx.se/libcurl/c/CURLOPT_CAPATH.html) setting. This is done by adding a `"libcurlUtils/CURLOPT_CAPATH"` entry to the `AlexaClientSDKConfig.json` file. Here is an example:

```
{

@@ -354,63 +354,63 @@ By default libcurl is built with paths to a CA bundle and a directory containing

```

**Note**: If you want to ensure that libcurl is *only* using CA certificates from this path, you may need to reconfigure libcurl with the `--without-ca-bundle` and `--without-ca-path` options and rebuild it to suppress the default paths. See [the libcurl documentation](https://curl.haxx.se/docs/sslcerts.html) for more information.

-## Release Notes
+## Release Notes

-v0.4.1 released 6/9/2017:
+v0.4.1 released 6/9/2017:

* Implemented Sensory wake word detector functionality
-* Removed the need for a `std::recursive_mutex` in `MessageRouter`
+* Removed the need for a `std::recursive_mutex` in `MessageRouter`
* Added AIP unit test
* Added `handleDirectiveImmediately` functionality to `SpeechSynthesizer`
* Added memory profiles for:
-  * AIP
-  * SpeechSynthesizer
-  * ContextManager
-  * AVSUtils
-  * AVSCommon
+  * AIP
+  * SpeechSynthesizer
+  * ContextManager
+  * AVSUtils
+  * AVSCommon
* Bug fix for `MultipartParser.h` compiler warning
* Suppression of sensitive log data even in debug builds.
Use the CMake parameter `-DACSDK_EMIT_SENSITIVE_LOGS=ON` to allow logging of sensitive information in `DEBUG` builds.
* Fix crash in ACL when attempting to use more than 10 streams
* Updated MediaPlayer to use `autoaudiosink` instead of requiring `pulseaudio`
* Updated MediaPlayer build to support local builds of GStreamer
* Fixes for the following GitHub issues:
-  * [https://github.com/alexa/alexa-client-sdk/issues/5](MessageRouter::send() does not take the m_connectionMutex)
-  * [https://github.com/alexa/alexa-client-sdk/issues/8](MessageRouter::disconnectAllTransportsLocked flow leads to erase while iterating transports vector)
-  * [https://github.com/alexa/alexa-client-sdk/issues/9](Build errors when building with KittAi enabled)
-  * [https://github.com/alexa/alexa-client-sdk/issues/10](HTTP2Transport race may lead to deadlock)
-  * [https://github.com/alexa/alexa-client-sdk/issues/17](Crash in HTTP2Transport::cleanupFinishedStreams())
-  * [https://github.com/alexa/alexa-client-sdk/issues/24](The attachment writer interface should take a `const void*` instead of `void*`)
+  * [MessageRouter::send() does not take the m_connectionMutex](https://github.com/alexa/alexa-client-sdk/issues/5)
+  * [MessageRouter::disconnectAllTransportsLocked flow leads to erase while iterating transports vector](https://github.com/alexa/alexa-client-sdk/issues/8)
+  * [Build errors when building with KittAi enabled](https://github.com/alexa/alexa-client-sdk/issues/9)
+  * [HTTP2Transport race may lead to deadlock](https://github.com/alexa/alexa-client-sdk/issues/10)
+  * [Crash in HTTP2Transport::cleanupFinishedStreams()](https://github.com/alexa/alexa-client-sdk/issues/17)
+  * [The attachment writer interface should take a `const void*` instead of `void*`](https://github.com/alexa/alexa-client-sdk/issues/24)

-v0.4 updated 5/31/2017:
+v0.4 updated 5/31/2017:

-* Added `AuthServer`, an authorization server implementation used to retrieve refresh tokens from LWA.
+* Added `AuthServer`, an authorization server implementation used to retrieve refresh tokens from LWA.

v0.4 released 5/24/2017:

-* Added the `SpeechSynthesizer`, an implementation of the `SpeechRecognizer` capability agent.
+* Added `SpeechSynthesizer`, an implementation of the `SpeechSynthesizer` capability agent.
* Implemented a reference `MediaPlayer` based on [GStreamer](https://gstreamer.freedesktop.org/) for audio playback.
-  * Added the `MediaPlayerInterface` that allows you to implement your own media player.
-* Updated `ACL` to support asynchronous receipt of audio attachments from AVS.
-* Bug Fixes:
-  * Some intermittent unit test failures were fixed.
-* Known Issues:
+  * Added the `MediaPlayerInterface` that allows you to implement your own media player.
+* Updated `ACL` to support asynchronous receipt of audio attachments from AVS.
+* Bug Fixes:
+  * Some intermittent unit test failures were fixed.
+* Known Issues:
* `ACL`'s asynchronous receipt of audio attachments may manage resources poorly in scenarios where attachments are received but not consumed.
* When an `AttachmentReader` does not deliver data for prolonged periods, `MediaPlayer` may not resume playing the delayed audio.

v0.3 released 5/17/2017:

-* Added the `CapabilityAgent` base class that is used to build capability agent implementations.
+* Added the `CapabilityAgent` base class that is used to build capability agent implementations.
* Added the `ContextManager` class that allows multiple Capability Agents to store and access state.
These events include `context`, which is used to communicate the state of each capability agent to AVS: * [`Recognize`](https://developer.amazon.com/public/solutions/alexa/alexa-voice-service/reference/speechrecognizer#recognize) - * [`PlayCommandIssued`](https://developer.amazon.com/public/solutions/alexa/alexa-voice-service/reference/playbackcontroller#playcommandissued) - * [`PauseCommandIssued`](https://developer.amazon.com/public/solutions/alexa/alexa-voice-service/reference/playbackcontroller#pausecommandissued) - * [`NextCommandIssued`](https://developer.amazon.com/public/solutions/alexa/alexa-voice-service/reference/playbackcontroller#nextcommandissued) - * [`PreviousCommandIssued`](https://developer.amazon.com/public/solutions/alexa/alexa-voice-service/reference/playbackcontroller#previouscommandissued) + * [`PlayCommandIssued`](https://developer.amazon.com/public/solutions/alexa/alexa-voice-service/reference/playbackcontroller#playcommandissued) + * [`PauseCommandIssued`](https://developer.amazon.com/public/solutions/alexa/alexa-voice-service/reference/playbackcontroller#pausecommandissued) + * [`NextCommandIssued`](https://developer.amazon.com/public/solutions/alexa/alexa-voice-service/reference/playbackcontroller#nextcommandissued) + * [`PreviousCommandIssued`](https://developer.amazon.com/public/solutions/alexa/alexa-voice-service/reference/playbackcontroller#previouscommandissued) * [`SynchronizeState`](https://developer.amazon.com/public/solutions/alexa/alexa-voice-service/reference/system#synchronizestate) - * [`ExceptionEncountered`](https://developer.amazon.com/public/solutions/alexa/alexa-voice-service/reference/system#exceptionencountered) -* Implemented the `SharedDataStream` (SDS) to asynchronously communicate data between a local reader and writer. -* Added `AudioInputProcessor` (AIP), an implementation of a `SpeechRecognizer` capability agent. -* Added the WakeWord Detector (WWD), which recognizes keywords in audio streams. 
v0.3 implements a wrapper for KITT.ai. + * [`ExceptionEncountered`](https://developer.amazon.com/public/solutions/alexa/alexa-voice-service/reference/system#exceptionencountered) +* Implemented the `SharedDataStream` (SDS) to asynchronously communicate data between a local reader and writer. +* Added `AudioInputProcessor` (AIP), an implementation of a `SpeechRecognizer` capability agent. +* Added the WakeWord Detector (WWD), which recognizes keywords in audio streams. v0.3 implements a wrapper for KITT.ai. * Added a new implementation of `AttachmentManager` and associated classes for use with SDS. * Updated the `ACL` to support asynchronously sending audio to AVS. @@ -422,7 +422,7 @@ The v0.2 interface for registering directive handlers (`DirectiveSequencer::setD * `DirectiveSequencerInterface::setDirectiveHandlers()` was replaced by `addDirectiveHandlers()` and `removeDirectiveHandlers()`. * `DirectiveHandlerInterface::shutdown()` was replaced with `onDeregistered()`. * `DirectiveHandlerInterface::preHandleDirective()` now takes a `std::unique_ptr` instead of a `std::shared_ptr` to `DirectiveHandlerResultInterface`. - * `DirectiveHandlerInterface::handleDirective()` now returns a bool indicating if the handler recognizes the `messageId`. + * `DirectiveHandlerInterface::handleDirective()` now returns a bool indicating if the handler recognizes the `messageId`. * Bug fixes: * ACL and AuthDelegate now require TLSv1.2. * `onDirective()` now sends `ExceptionEncountered` for unhandled directives. @@ -430,13 +430,13 @@ The v0.2 interface for registering directive handlers (`DirectiveSequencer::setD v0.2 updated 3/27/2017: * Added memory profiling for ACL and ADSL. See [**Appendix A**](#appendix-a-mempry-profile). -* Added command to build API documentation. +* Added command to build API documentation. v0.2 released 3/9/2017: * Alexa Client SDK v0.2 released. * Architecture diagram has been updated to include the ADSL and AMFL. 
* CMake build types and options have been updated. -* New documentation for libcurl optimization included. +* New documentation for libcurl optimization included. v0.1 released 2/10/2017: * Alexa Client SDK v0.1 released.
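The directive-handler interface changes listed in the v0.3 notes above can be sketched in C++ as follows. This is a simplified, hypothetical rendering: only the method names and the `shared_ptr` → `unique_ptr` / `void` → `bool` signature changes come from the changelog, while `ToyHandler`, `setCompleted()`, and the bookkeeping around them are illustrative, not the SDK's actual headers.

```cpp
#include <memory>
#include <set>
#include <string>

// Simplified stand-in for the result object that v0.3 passes by unique_ptr.
// setCompleted() is an assumed method, used here only to show ownership.
class DirectiveHandlerResultInterface {
public:
    virtual ~DirectiveHandlerResultInterface() = default;
    virtual void setCompleted() = 0;
};

class DirectiveHandlerInterface {
public:
    virtual ~DirectiveHandlerInterface() = default;

    // v0.3: takes std::unique_ptr (exclusive ownership of the result object)
    // where v0.2 took std::shared_ptr.
    virtual void preHandleDirective(
        const std::string& messageId,
        std::unique_ptr<DirectiveHandlerResultInterface> result) = 0;

    // v0.3: returns a bool indicating whether the handler recognizes the
    // messageId, rather than returning void.
    virtual bool handleDirective(const std::string& messageId) = 0;

    // v0.3: replaces the old shutdown() hook.
    virtual void onDeregistered() = 0;
};

// Toy handler: recognizes only messageIds it saw in preHandleDirective().
class ToyHandler : public DirectiveHandlerInterface {
public:
    void preHandleDirective(
        const std::string& messageId,
        std::unique_ptr<DirectiveHandlerResultInterface> result) override {
        m_known.insert(messageId);
        m_result = std::move(result);  // handler now solely owns the result
    }

    bool handleDirective(const std::string& messageId) override {
        if (m_known.count(messageId) == 0) {
            return false;  // unrecognized messageId: caller can report it
        }
        if (m_result) {
            m_result->setCompleted();
        }
        return true;
    }

    void onDeregistered() override { m_known.clear(); }

private:
    std::set<std::string> m_known;
    std::unique_ptr<DirectiveHandlerResultInterface> m_result;
};
```

The `bool` return lets the caller distinguish a handled directive from an unrecognized one, which fits the v0.3 bug fix where unhandled directives trigger an `ExceptionEncountered` event.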