Amazon (AMZN) recently revealed plans to introduce a feature that lets its AI voice assistant, Alexa, speak in the voices of deceased family members and relatives. The feature was first shown at Amazon's re:MARS conference, in a video of Alexa reading a book aloud in the voice of a child's late grandmother.

Rohit Prasad, senior vice president and head scientist for Alexa, said this past Wednesday at the re:MARS conference that the feature's primary purpose is to make Alexa more lifelike by simulating human attributes and emotions.

"These attributes have become even more important during the ongoing pandemic when so many of us have lost ones that we love," Prasad said in a statement. "While AI can't eliminate that pain of loss, it can definitely make their memories last."

Although Amazon has not decided whether it will make the feature widely available, it has revealed that its systems can mimic a human voice from as little as one minute of recorded audio of the deceased relative.

According to Subbarao Kambhampati, a professor of computer science at Arizona State University, the feature may comfort those missing loved ones, but people should remain cautious about trusting what they hear.

"As creepy as it might sound, it's a good reminder that we can't trust our own ears in this day and age," Kambhampati commented. "But the sooner we get used to this concept, which is still strange to us right now, the better we will be."

And though voice cloning is an intriguing concept, for many people it raises a variety of ethical concerns.

"For people in grieving, this might actually help in the same way we look back and watch videos of the departed," Kambhampati remarked. "But it comes with serious ethical issues, like is it OK to do this without the deceased person's consent?"