When Google took the stage on October 4th to unveil the highly anticipated Pixel and Pixel XL smartphones, many users were thrilled at the renewed attention Google appeared to be giving its hardware lineup.
However, paying attention to the plethora of other products and services announced at the event makes it clear that Google’s true focus was Artificial Intelligence (AI), with its new Google Assistant at the forefront.

The software addition to Google’s product suite was the culmination of the company’s existing work on Search technologies, Knowledge Graphs, Natural Language Processing, and Big Data analysis. One of the most enticing aspects of Google Assistant was its ability to contextually understand situations and respond appropriately, hence the “Assistant” role.
However, a product like Google Assistant can only hold a conversation so long as the answer is readily available from simple Google Search queries. Eventually, Google Assistant would fail to understand a more complex query, and the consumer would land on Google Search’s results page to look for an answer manually. Forcing a user to leave the confines of AI-based understanding and contextualization represents a failure in Google Assistant, one that severely limits its potential as an assistant. An assistant that forces the user to manually query Google Search is basically a glorified Voice Search.
This is where Actions on Google comes in. Actions on Google is a developer platform that allows third-party developers to create conversational, reply-based actions for Google Assistant. What was previously impossible for Google Search to answer can now be handled by a third-party Assistant plug-in, filling in the gaps in Assistant’s functionality. As Google had promised during the October 4th event, Actions on Google is launching right on schedule.
These actions will be available on any platform that supports Google Assistant, which currently includes Google Home, Google Allo, and the Google Pixel and Pixel XL. However, today’s launch is centered on Google Home, with integrations for Allo and Pixel coming at a later date.
“We’ll continue to add more platform capabilities over time, including the ability to make your integrations available across the various Assistant surfaces like Pixel phones and Google Allo.”
To get started as a developer, visit the Actions on Google website. In addition to the Actions API, Google has also worked with a few development partners to provide conversational development tools like API.AI and GupShup, analytical tools like DashBot and VoiceLabs, and consulting companies like Notify.IO, Assist, Witlingo, and Spoken Layer. Furthermore, developers can get started by accessing samples and Voice User Interface resources, as well as integrations from early access partners when they roll out.
In the future, Google is also planning to enable support for purchases and bookings, along with “deeper Assistant integrations across verticals”. If you wish to make use of these upcoming features as a developer, you will need to register for Google’s early access partner program.
How does it work?
Ars Technica and The Verge had early hands-on time with the platform. Currently, only Conversational Actions (which require multiple back-and-forth exchanges) have been demoed, while “Direct Actions” that tap into Assistant’s IoT-hub nature (like switching on a light) are not yet available.
Actions created with the API will be triggered by certain distinct keywords. These voice triggers will switch out Google Assistant for a new chat personality created by the third-party developer. The third-party chat bot will be able to handle commands that are not available to Google — such as going through the process of ordering an Uber ride. While in conversation with this new chat bot, standard Google Assistant commands will remain inaccessible to the end user until they exit the interaction or let the conversation time out.
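The trigger-and-handoff flow described above can be sketched as a simple dispatcher. To be clear, this is an illustrative model only, not Google’s actual API: the bot class, the keyword matching, and the timeout value are all hypothetical stand-ins for behavior Google handles server-side.

```python
import time


class UberBot:
    """Hypothetical third-party conversational action (not a real API)."""
    name = "uber"

    def handle(self, utterance):
        if "order" in utterance:
            return "Where would you like to be picked up?"
        return "I can order a ride for you. Just say 'order a ride'."


class Assistant:
    """Sketch of Assistant-level dispatch: a keyword trigger hands the
    conversation to a third-party bot, which keeps control until the
    user exits or the session times out."""

    TIMEOUT = 30.0  # hypothetical idle timeout, in seconds

    def __init__(self, bots):
        self.bots = {bot.name: bot for bot in bots}
        self.active = None      # bot currently holding the conversation
        self.last_turn = 0.0

    def ask(self, utterance, now=None):
        now = time.monotonic() if now is None else now
        # A stale third-party conversation times out back to Assistant.
        if self.active and now - self.last_turn > self.TIMEOUT:
            self.active = None
        # An explicit exit also returns control to Assistant.
        if self.active and utterance.strip().lower() == "exit":
            self.active = None
            return "Okay, back to Google Assistant."
        self.last_turn = now
        # While a bot is active, standard Assistant commands are unavailable.
        if self.active:
            return self.active.handle(utterance)
        # Otherwise, scan for an invocation keyword to hand off control.
        for name, bot in self.bots.items():
            if name in utterance.lower():
                self.active = bot
                return bot.handle(utterance)
        return "(standard Google Assistant reply)"
```

In this sketch, saying “talk to uber” hands every subsequent utterance to the bot until “exit” or a 30-second silence, mirroring the conversational lock-in the platform demos showed.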
End users will not have to configure anything on their end to install or use these bots, as everything will be enabled via a server-side switch. The Verge notes that Google will not be creating an “Action Store”, effectively making every action available to every user by default. Google will curate the list of keywords that developers can use to invoke their chat bots, in order to prevent conflicting commands. App-store-like policies will be in place to prevent keyword camping (one company using another’s name as its keyword) and to safeguard important generic keywords such as “shopping”. Full policies and guidelines will eventually be published to keep the process transparent.
With no “Store”, questions remain unanswered about how users will discover new services on the Assistant platform. We currently have two main concerns. First, how will Google provide fair access to lesser-used services? Second, how will Google inform users that a new keyword exists? The initial press release offers no answer to these immediate concerns, but we will be watching to see how Google plans to address them.
In an entirely new product segment whose only serious competition comes from the Amazon Echo and its Alexa AI, Google Assistant has a lot of catching up to do. The initial difficulty will be creating a product that end users actually want, and API support is the first step toward attracting the right services to a young platform.
What are your thoughts on the Actions on Google platform? Let us know in the comments below!
from xda-developers http://ift.tt/2hb9F4T
via IFTTT