ELVIS Act: Breaking down the Ensuring Likeness, Voice, and Image Security Act of 2024. Scott Hervey and James Kachmar from Weintraub Tobin discuss its impact on AI audio technology and how it protects musicians in the next installment of “The Briefing.”
Watch this episode on the Weintraub YouTube channel here or listen to this podcast episode here.
Show Notes:
Scott: Tennessee’s ELVIS Act isn’t what you think. The acronym stands for the Ensuring Likeness, Voice, and Image Security Act of 2024. It’s about protecting a musician’s voice from AI clones. The bill was signed into law on March 21st, 2024, amid growing concern by the music industry and musicians over AI soundalikes and deepfakes. I’m Scott Hervey from Weintraub Tobin, and I’m joined again by my partner, James Kachmar, to talk about this bill and its impact on the nascent AI audio space in this episode of “The Briefing” by Weintraub Tobin. James, welcome back to “The Briefing.”
James: Thanks, Scott.
Scott: So, James, let’s dive right into this bill and see what it does and doesn’t do. This bill amends Tennessee’s existing right of publicity statutes. Tennessee’s existing law previously provided that individuals, or in the case of a deceased individual, their estate, have a proprietary right in the use of that person’s name, photograph, or likeness in any medium, in any manner. Now, one could probably have argued that likeness included voice, but this bill makes it clear that a person’s voice is among the personal property rights the statute protects.

James: Right, Scott. And in the bill, voice is defined as a sound in a medium that is readily identifiable and attributable to a particular individual, regardless of whether the sound contains the actual voice or a simulation of the voice of the individual. So essentially, a soundalike.
Scott: That’s right. So, let’s talk about what this bill protects against. Tennessee’s right of publicity statute now protects against the use of a person’s name, photograph, voice, or likeness for the purpose of advertising products, merchandise, goods, or services, or for the purposes of fundraising, solicitation of donations, or purchases of products, merchandise, goods, or services. The bill also adds new language which provides that a person will be civilly liable if they publish, perform, distribute, transmit, or otherwise make available to the public an individual’s voice or likeness with knowledge that the use of the voice or likeness was not authorized by the individual.
James: So, Scott, I assume that this bill is going to put a target on AI voice companies for possible lawsuits?
Scott: Yeah, it does. It absolutely does. The bill provides for civil liability for any person that distributes, transmits, or otherwise makes available an algorithm, software tool, or other technology, service, or device, the primary purpose or function of which is the production of an individual’s photograph, voice, or likeness without authorization from the individual.
James: Scott, do I understand the bill correctly that not only the individual performer will have a cause of action, but it also gives record labels a right to sue for violations?
Scott: Yeah, absolutely. That’s right. The bill adds a paragraph to the section that discusses remedies for violations of the section. This new paragraph states that where a person has entered into a contract for an individual’s exclusive personal services as a recording artist, or an exclusive license to distribute sound recordings that capture an individual’s audio performances, an action to enforce the rights set forth in this part may be brought by that person or individual. So, in other words, record labels.
James: I’m sure there’s a lot of them in Nashville, Tennessee.

Scott: James, even though the statute does not appear to be limited to commercial advertising, previous federal court decisions have limited its scope to the advertising or promotional context and have excluded performances, sports broadcasts, websites, and creative works from its reach. The new language from this bill seems to also target creative works, such as the fake Drake AI song and other AI soundalike recordings.
James: I agree with you, and I think that this may be problematic.
Scott: In what way?
James: Well, if an artist or a record label attempts to sue under the statute for an AI soundalike recording that is a creative work, such as the AI Johnny Cash cover of Barbie Girl, well, I think that may run afoul of Section 114(b) of the Copyright Act. Now, Section 114(b) permits soundalikes. A publication by the US Copyright Office specifically says that, quote, under US copyright law, the exclusive rights in sound recordings do not extend to making independently recorded soundalike recordings, end quote. If that isn’t clear enough, the notes to Section 114 by the House Judiciary Committee provide as follows, quote: Subsection (b) of Section 114 makes clear that statutory protection for sound recordings extends only to the particular sounds of which the recording consists and would not prevent a separate recording of another performance in which those sounds are imitated. Mere imitation of a recorded performance would not constitute a copyright infringement, even where one performer deliberately sets out to simulate another’s performance as exactly as possible, end quote.
Scott: So, when the inevitable lawsuits start to get filed as a result of this new law, do you think a potential defendant has a good preemption argument?
James: I do think they have a good preemption argument.
Scott: I guess we’ll just have to wait and see, and I assume we won’t have to wait too long for that.
Scott: I think you’re correct in that, James. Thank you for listening to this episode of “The Briefing.” We hope you enjoyed this episode. If you did, please remember to subscribe, leave us a review, and share this episode with your friends and colleagues. If you have any questions about the topics we covered today, please leave us a comment.