Google has asked its employees to help train Bard, its rival to the popular chatbot ChatGPT, to provide accurate answers, after incorrect responses drew a wave of widespread criticism and wiped $100 billion off the company's market value.
Executives at the American tech giant know that the Bard chatbot is not always accurate in its responses to search queries, and it now appears they will share the burden of fixing incorrect answers with their employees.
On Wednesday, Prabhakar Raghavan, Google's vice president for search, sent an email asking employees to help ensure Bard provides correct answers, encouraging them to rewrite responses on topics they understand well. The document stated, "Bard learns best by example, so taking the time to rewrite a response thoughtfully will go a long way in helping us improve the mode."
Raghavan's request was echoed by Google CEO Sundar Pichai, who asked employees to spend two to four hours of their time with Bard, acknowledging that "this will be a long journey for everyone."
Raghavan said that while the technology is exciting, it is still in its early days: "We feel a great responsibility to get it right, and your participation in the pilot will help accelerate the model's training and test its load capacity."
Google had unveiled its Bard chatbot last week, but a series of errors by the bot sent the company's share price down nearly 9 percent. Employees criticized the botched launch, describing the rollout among themselves as "rushed," "botched," and "comically short-sighted."
To try to stamp out the AI's mistakes, company leaders are relying on human knowledge. The document sent to employees includes a "do's and don'ts" section, in which Google lays out guidelines to keep in mind before training Bard.
In the "do" section, Google asked employees to keep responses "polite, casual and approachable," to write in the first person, and to maintain a "neutral, unopinionated tone." In the "don't" section, Google instructed employees not to engage in stereotyping based on race, nationality, gender, age, sexual orientation, political ideology, or location.
Google also instructed employees not to describe Bard as a person, imply that it has emotions, or claim that it has human-like experiences. Employees were further asked not to approve responses from the bot that offer "legal, medical or financial advice" or that are offensive or hurtful.
To encourage employees to take part in training Bard and provide feedback, Raghavan said contributors will receive a Moma badge, which appears on internal employee profiles.