Original text by Patrice Bernard (LinkedIn)
For many businesses, especially in the highly cautious financial sector, the arrival of ChatGPT has sparked more concern than enthusiasm for its opportunities, to the point of an outright ban in leading institutions. Faced with the identified - and very real - risks, Gartner envisions a solution for a reasoned adoption of the underlying technology.
Beyond the restrictions on employees' free access to consumer-grade tools, whose motivations are not always clear (is the danger really greater than with conventional search engines?), large corporations are generally willing to experiment with generative AI products offered not only by startups like OpenAI but also by their long-standing suppliers, such as Microsoft or, to a lesser extent, Google.
However, most of them have serious reservations about deploying these tools, even internally to their own employees, without prior precautions, because critical questions remain unanswered once these generic platforms are connected to internal data sources. How can we ensure that the information presented to a user falls within the scope of their authorizations? How can we guarantee the quality of results and, above all, avoid the ever-possible "hallucinations"? The list goes on.
According to Gartner, salvation will come from "generative micro-apps," specialized interfaces that act as intermediaries for standard engines, assuming a role as controllers of interactions. The person no longer freely queries artificial intelligence through chat; instead, they are offered only pre-defined "prompts," whose relevance and security have been verified, and whose responses, while still dynamic, can also be filtered in case of deviation.
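The pattern Gartner describes can be made concrete with a minimal sketch: a wrapper that only accepts vetted prompt templates and filters the model's dynamic response before it reaches the user. All names here are illustrative assumptions, and the LLM call is stubbed out so the example stays self-contained; a real micro-app would call an actual model API behind this interface.

```python
# Minimal sketch of a "generative micro-app": the user never queries the
# model freely; they pick from pre-approved templates, and responses are
# filtered in case of deviation. Names and rules are hypothetical.

ALLOWED_PROMPTS = {
    "summarize_policy": "Summarize this internal policy in plain language: {text}",
    "find_reference": "List reference-library documents relevant to: {topic}",
}

BLOCKED_TERMS = {"salary", "password"}  # stand-in for a deviation filter


def call_model(prompt: str) -> str:
    # Stand-in for a real LLM API call (stubbed for self-containment).
    return f"[model answer to: {prompt}]"


def run_micro_app(prompt_id: str, **params) -> str:
    # 1. Only pre-defined, vetted prompt templates may be used.
    if prompt_id not in ALLOWED_PROMPTS:
        raise ValueError(f"Prompt '{prompt_id}' is not an approved template")
    prompt = ALLOWED_PROMPTS[prompt_id].format(**params)
    # 2. The response remains dynamic, but is screened before display.
    answer = call_model(prompt)
    if any(term in answer.lower() for term in BLOCKED_TERMS):
        return "Response withheld: restricted-content rule matched."
    return answer
```

The design choice this illustrates is exactly the trade-off discussed below: safety comes from narrowing the interaction surface, which is also what turns the tool into little more than a constrained search engine.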
Certainly, the principle, as described, fulfills its role perfectly in protecting against the anticipated threats... but at what cost? By limiting the range of possible queries, these "micro-apps" ultimately behave like restricted search engines, with at most an added capacity for natural language processing. This is, in fact, the example the Gartner analyst gives: an author (perhaps himself) drawing on a vast reference library to support the article he is writing.
This approach has a major flaw: it disregards the intrinsic value of an individual's creativity in how they engage with AI. The flaw is all the more significant in that it reveals a contradiction, when Nader Henein explains, in effect, that the automaton's abilities must be narrowed in order to justify the contribution of human intelligence. Why on earth should human intelligence be acceptable for addressing the problem at hand but not for deciding how to solicit a robot's opinion?
In summary, I believe that "generative micro-apps" may be an acceptable short-term compromise, but they will never replace the autonomous, unencumbered use of AI platforms. The only viable path for organizations that truly want to capitalize on their potential while managing the associated risks will necessarily involve training their employees... and trusting their ability to exercise judgment in everyday use.