Google Home vs. Amazon Echo – Developer’s Review

Smart speakers are a new class of gadget that uses voice-recognition and machine learning technology to provide access to information, news, and music. As developers may already know, some of these devices are programmable—for example, you can build “skills” for Amazon’s Alexa digital-assistant platform, which powers hardware such as the Echo. But is developing for these platforms a pleasant experience?

Amazon Echo

Amazon launched the Echo in mid-2015 in the United States, followed by rollouts in other countries; it followed that up with the Echo Dot, a smaller device, and the Amazon Tap, an ultra-portable version. Each proved a solid seller, and the e-commerce giant soon ramped up production on a version with a screen (the upcoming Echo Show). These devices’ “brain,” Alexa, has two parts: Alexa Skills and the Alexa Voice Service.

In simplest terms, an Alexa skill is a mini-application invoked by a phrase prefixed by “Alexa.” Amazon has tried to make the development process as simple as possible, offering an Alexa Skills Kit that you can use to create a skill. Each skill features two components: an Intent Schema (a JSON structure) and Spoken Input Data, which consists of Sample Utterances (structured text files that map spoken phrases to “intents”) and Custom Values (the values of specific terms referenced in the intents).
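To make those two components concrete, here is a rough sketch of what they might look like for a hypothetical “games pack” skill. Alexa expects the Intent Schema as JSON; it is shown here as a JavaScript object, and every name (PlayGameIntent, GAME_NAME, the utterances) is illustrative rather than taken from a real skill:

```javascript
// Hypothetical Intent Schema for a "games pack" skill (illustrative only).
const intentSchema = {
  intents: [
    {
      intent: "PlayGameIntent",
      // "Game" would be backed by a custom slot type (the "Custom Values")
      slots: [{ name: "Game", type: "GAME_NAME" }]
    },
    { intent: "AMAZON.HelpIntent" } // built-in intents need no slots
  ]
};

// Sample Utterances pair an intent name with phrasings a user might say;
// {Game} marks where the custom slot value appears in the sentence.
const sampleUtterances = [
  "PlayGameIntent run {Game}",
  "PlayGameIntent start a game of {Game}"
];

console.log(intentSchema.intents.map(i => i.intent).join(", "));
```

The skill service matches what the user said against the utterances, resolves the slot values, and hands your code a named intent rather than raw audio.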

There are over a dozen different ways to say the same thing, including Ask, Tell, Search, Talk To, Open, Launch, Start, Resume, Run, Load, Begin and Use, all combined with the invocation name and the intent. Things can get complex in short order.

If the invocation name is “games pack” and the intent is “poker,” you might say: “Alexa, launch games pack and run poker.”

While Amazon’s documentation presents building for Alexa in a straightforward way, you really need to diagram the skill with care. For example, let’s say you want to build a skill that will order a taxi. The user needs to specify where they want to be picked up from, where they’re going, when they want to be picked up, the number of passengers, and any special requests (disability-accessible, etc.). That’s a lot of dialog and responses, and it takes a lot of time to build a natural flow. If you don’t take that care, the skill will have incomplete functionality or result in a conversational “dead end,” and nobody will use it.
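One way to see why the dialog tree grows so quickly: each piece of information the taxi skill needs becomes a slot, and the skill must be able to prompt for whichever slots the user left out, in any order. A minimal sketch of that slot-filling loop (all slot names and prompts are hypothetical):

```javascript
// Hypothetical slots for a taxi-ordering skill, each with the follow-up
// question to ask while that slot is still empty.
const taxiSlots = [
  { name: "PickupLocation",  prompt: "Where should the taxi pick you up?" },
  { name: "Destination",     prompt: "Where are you going?" },
  { name: "PickupTime",      prompt: "When do you want to be picked up?" },
  { name: "Passengers",      prompt: "How many passengers?" },
  { name: "SpecialRequests", prompt: "Any special requests, such as wheelchair access?" }
];

// Given what the user has provided so far, pick the next question to ask.
// A conversational "dead end" is simply a state this loop can never escape.
function nextPrompt(filled) {
  const missing = taxiSlots.find(s => !(s.name in filled));
  return missing ? missing.prompt : null; // null: all slots filled, ready to book
}

console.log(nextPrompt({ PickupLocation: "the airport" }));
```

Five slots already means dozens of possible partially-filled states, which is why diagramming the flow before coding pays off.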

Amazon provides specialized APIs as part of the Alexa Skills Kit: for example, the Smart Home Skill API (for lights, thermostats and so on), the Flash Briefing Skill API (for news flashes), and the Video Skill API (changing TV channels, pausing video playback, and so on). For these, you have to use AWS Lambda, and you can write code in Java, Node.js, Python or C#.

For anything that falls outside of these specialist Skill APIs (such as taxi-ordering, to loop back to our previous example), you have to write custom code yourself; in these instances, you can either use AWS Lambda or provide your own web server. I suspect that most developers will go down the custom route, but if you don’t have time to write code yourself, you can use the Alexa Voice Service, as well as any of the skills available in the Amazon Skills Store.

Google Home

Google’s own digital-assistant hardware, Home, opened up to development in late 2016. Google Assistant, which runs on smartphones, powers Home. When it comes to voice apps, Assistant boasts a lot of similarities to Alexa.

With Google, Actions define your app’s external interface. There are two components: an intent, which is a simple messaging object that describes the action, and fulfillment, which is the web service that can fulfill the intent. Actions are defined in a JSON action package.
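Assuming the Actions SDK layout, an action package ties an intent to the fulfillment endpoint that handles it. The sketch below shows that structure as a JavaScript object (normally it lives in a JSON file); the conversation name and webhook URL are placeholders, not a real deployment:

```javascript
// Sketch of a JSON action package: one action binding the MAIN intent
// to a fulfillment webhook. Names and URL are illustrative.
const actionPackage = {
  actions: [
    {
      name: "MAIN",
      intent: { name: "actions.intent.MAIN" },
      fulfillment: { conversationName: "eliza" }
    }
  ],
  conversations: {
    eliza: {
      name: "eliza",
      url: "https://example.com/fulfillment" // hypothetical HTTPS endpoint
    }
  }
};

console.log(actionPackage.actions[0].intent.name);
```

When the user invokes the app, Assistant resolves the intent, then POSTs a JSON conversation request to the matching fulfillment URL.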

If you’re interested, start by heading over to the Google Developers’ portal for Actions. Create an Actions Project and an API.AI agent that defines your intents. Once those are set up, provide fulfillment, which is code that runs on Firebase (much like AWS Lambda; both request and response messages are JSON).

You can either choose Cloud Functions on Google Cloud or host your own web service, so long as it supports HTTPS and can parse JSON. Google provides a Node.js client library, as well as examples for deploying to Google App Engine and Heroku.

Each action package has one actions.intent.MAIN intent. This is the default action, and is specified in JSON. Here’s a useful example from the Eliza sample on GitHub: first, the main intent handler, then one for raw input:

```javascript
const mainIntentHandler = (app) => {
  console.log('MAIN intent triggered.');
  const eliza = new Eliza();
  app.ask(eliza.getInitial(), {elizaInstance: eliza});
};

const rawInputIntentHandler = (app) => {
  console.log('raw.input intent triggered.');
  const eliza = new Eliza();
  const invocationArg = app.getArgument(INVOCATION_ARGUMENT);
  const elizaReply = invocationArg
    ? eliza.transform(invocationArg)
    : eliza.getInitial();
  app.ask(elizaReply, {elizaInstance: eliza});
};
```


Both Amazon and Google provide design documents for Voice UI and simulators, so you can really do a lot of research and experimentation before you actually build. (For Amazon, that functionality is available via Echosim; for Google Assistant, there’s an Actions Simulator.)

Thanks to all those resources, it’s easy to begin shaping out an idea for an action or skill, no matter which platform you choose; but voice commands are a deceptively difficult thing to get right, and developers could become snarled up unless they carefully diagram out functionality beforehand.

The game will only get more complicated later this year when Apple introduces the HomePod, which leverages Siri. That means developers who want to work on all available platforms will need to become familiar with yet another company’s software. Fortunately, there’s SiriKit, as well as documentation that breaks down Apple’s take on intents.

Welcome to the beginning of the future.
