Lana Del Rabies opens up on the visual and sonic influences of ‘STREGA BEATA’
In our latest Artist Spotlight, we chat to the enigmatic and intense Lana Del Rabies about the influences and ideas that shape her forthcoming album, ‘STREGA BEATA’.
Hello, how are you?
I’m a bit tired and always concerned about the state of the world but also better than ever in some regard!
How important is the image and visual aesthetic of Lana Del Rabies?
I would say visual aesthetic is more important than “image”. I’ve always been a bit at odds with the idea of self-branding and being a persona, especially with how necessary it is these days. It’s not that I don’t understand how to do it; I just have never been inspired by one-dimensional art and ideas (or people for that matter) and I actively seek out things that can’t be neatly categorized. That goes for a lot of the work I do too.
In terms of visual aesthetics, I was a visual artist before I started working with audio and music, so for me, all mediums are just different formats to express an idea. With every release, I’m also very involved with brainstorming, planning, and executing all the visual elements that go with it – photos, videos, design, etc. I tend to do most of it myself, but with STREGA BEATA, this was an opportunity to bring in other artists/friends into other parts of the process. I’ve been collaborating with a lot of different people on these elements for over 2 years, and it’s been satisfying to finally see it come together.
How do you define success now, and has that changed throughout your career?
I will admit I used to heavily internalize what other people thought of me, and I have also had growing pains in my career with being self-critical and overly focused on how “successful” certain accomplishments were. I think when you over-identify yourself with your work, it’s easy to become fixated on whether you are “enough” as an artist – or even a person. I used to take “setbacks” really personally. I don’t think being motivated by achievements alone is ever really satisfying, and it was something I needed to learn.
I think success is a balance between actually doing your best to create and execute an idea, and giving yourself the best shot at getting it out there and giving something to other people. I think the actual goal behind wanting to achieve something with your work should contribute something valuable to your quality of life and/or get you in an optimal position to give something to other people, whether it’s through your work or the ability to give a platform.
What about legacy, both on an artistic and personal level?
I think most people want to be remembered for something they contributed to the world. For me, I think legacy ties into my idea of success because I want my work to give something meaningful to others for years, whether it’s catharsis or inspiration, or both. I also would hope to be in a position where I have the platform to help other people get their work and what they have to offer out there. Supporting good and talented people is to me the point of a legacy.
‘Forgive’ is a beautiful way to end a sonically intense record – can you talk me through the development of that one?
That’s actually the heaviest (as in emotionally) track on the record for me, despite it being one of the least “heavy” tracks sonically that I’ve ever done. The entire process of writing STREGA BEATA was a huge undertaking that took 3 years longer than I intended to finish and release.
It started as a concept record using mythology and religious stories to create a metaphor about how overwhelming it is to exist in a time where it feels like humanity may be on the precipice of irrevocably destroying itself and the environment, and how we have access to witnessing all of that destruction in real-time, all at once, almost the way a god or deity would. There were other things I was processing as well, like the way we tend to be so careless and cruel with children as a society. At the beginning of the writing process, I was working a job involved with child welfare, and there were a lot of things happening politically in the U.S. that were making that more of a public conversation. For me, there was a lot of grief in the idea that we as a species would have such an incredible world in front of us, and we are potentially reaching the endpoint. Everything that was already there was only made more complicated by the pandemic and death in my personal life.
Forgive is a track that is the acceptance of grief and the inevitability of death or reaching an endpoint. Whether there truly is a man-made apocalypse coming, something personal is lost in a painful way, or you are on the edge of death, there will be that moment where you know you no longer have control of where you are now and you have to find peace in the transition. That sounds morbid, and I’m not advocating for apathy, but what I mean is that everything ends, and that is the only thing we can ever know in this life. On a personal level, the song is about needing to Forgive myself for being in dark places in my life, and knowing that when I die, I can look back and know I made the effort to do better.
‘Mother’ is particularly haunting as well, is it important for you to have that space and atmosphere that builds in your songs?
I think that playing with atmosphere and space is one of the most exciting things about making music. I find myself most drawn to music that makes an effort to create movement either through transitions or density, especially if they make an effort to draw from different genres to create something compelling. I think it is a reason why I am not easy to categorize with what I do, and why I don’t know where to begin when people ask me about what my music is like if they haven’t heard it. I respect artists who master a single genre and can write to a specific structure, but it’s just never been something I’ve been interested in. I always want to push what I’m working on to be something I haven’t heard before.
What are the biggest challenges of taking the project out live? It seems to be quite intense for you as a performer – as in, it seems to take you to a “different” place or space?
The biggest challenge is honestly remembering and executing all my cues for playing the tracks properly. The physical, cathartic, performance element is actually the easy part for me – that’s intuitive and freeing. It can be physically and emotionally demanding, but it is a huge reason I love playing live shows. There is nothing else like it in day-to-day life, and I love being in an honest and vulnerable space with others that way. It’s that “different” place that I hope to bring myself and others into. I just have to also keep the disciplined part of myself engaged so that I can actually play my music in the process, which is actually more strenuous.
From listening and watching the live performance, would it be accurate to say that LDR is very much catharsis for you, and a way to process these various challenging experiences?
I honestly don’t know where I would be right now if I had not started this project at the point when I did – it really got me through some incredibly dark periods of my life. The level of intensity that went into this project during that time is part of why I wanted to start over with a new name a few years ago. I honestly wanted a fresh start without the baggage of my past. Things obviously went another way, and I was surprised to find that a lot of people cared about the LDR project after I tried to change it. I’m now glad that it’s still a part of me and that it has evolved with me.
Is there anything that I’ve missed that you would like to add?
Not really, just that I’m grateful this record is finally coming out, grateful that other people are excited for it, and that I am very ready for whatever is next.
Finally, what is a “gem” of advice that you have received, musically or otherwise that you carry with you each day?
Focus your energy on what is in your control.
Thank you so much for the time!!
Thank you!
—
Lana Del Rabies’ forthcoming album, ‘STREGA BEATA’, is set for release March 17th, 2023, on 2XLP/CD/Digital through Gilgongo Records.
After a brief hiatus, Sam An’s dark electronic, genre-bridging solo project under the semi-notorious moniker Lana Del Rabies makes a potent return. ‘STREGA BEATA’, which loosely translates to “Blessed Witch,” is an apocalyptic myth told through dense, textural compositions pulling from the vast corners of dark genre music. The title’s meaning is appropriate for the existential, cathartic, and otherworldly themes explored.
The album begins with the ominous scorn of the all-knowing, intangible creator in “Prayers of Consequence,” with the lyric “You don’t know, what I’ll show” coming in after the thundering kick of hypnotic, ritualistic drums and a delicate looped piano sample over a deep bass drone. The song moves into a layered, rhythmic warning from many voices about the consequences of brutality and apathy, and the saga of a deity grappling with a man-made apocalypse.
The accompanying video is a collaborative effort between five American artists residing in the Southwest to bring the mystical and allegorical imagery of the song to life. Louise Saafi, Adam Cooper-Terán, Abraham Cooper, and M. Dean Bridges all collaborated to film various rituals performed by Lana Del Rabies over a span of two years in different landscapes of Arizona. A majority of the imagery was conceived between Sam An (Lana Del Rabies) and Louise Saafi in 2021 to depict and embody the multifaceted “Mother” creator figure that narrates STREGA BEATA. All of the artists involved collaborated to find environments that would serve as the settings for these rituals and shot their own perspectives of the events. These rituals are intended to transmute grief around violence and loss, in and of the natural world. The final product, edited by An and Cooper-Terán, is a collaborative phantasmal vision.
Listen to “Prayers of Consequence”: https://youtu.be/nW5Jl96S8vc
Other streaming options: https://songwhip.com/lanadelrabies/prayers-of-consequence
“There is a specific type of grief that comes from witnessing the brutality of what humans are capable of towards each other. If that grief goes unprocessed, it is doomed to also manifest as brutality,” says Phoenix-based musician and producer Sam An. “There have been many moments while writing this record that I was grieving through an event that was personal or worldwide, and then another sudden, tragic event would happen. I needed to re-evaluate what I wanted to say because my world and perspective had shifted again. Trying to keep up with tragedy can make one feel helpless, and one way to feel in control is through acts of destruction. My personal impulse towards destruction nearly ended this project.”
Written, produced, and almost entirely performed by An, she says, “I started writing the third record immediately after the promotion of Shadow World. I had recently become sober and was mending myself not just from experiences of abuse and trauma that occurred throughout my life, but processing how I had abused the people closest to me in my most out-of-control moments. I also at the time had a job involved in child welfare. The not-new and ongoing violent separation and exploitation of children at the US/Mexico border was everywhere in the media, and directly affecting the community around me. Brutality often feels like it is everywhere with how we experience the world now, but for the first time, I could not numb myself from it. That awareness only extended through the world events we all have had to live through these last few years, and the personal loss that came from that. I began working through the pain of abuse, death, and grief, through an actual story in my work.”
This is where the mythical allegory in ‘STREGA BEATA’ begins – the record is told through the evolving perspective of a cryptic and obscure “Mother” creator figure, specifically echoing the mother and crone goddess archetypes. This figure is vocalized through LDR’s eerie harmonies, spoken words, pleading howls, and frenetic screams aimed towards another archetype, the chosen one. These vocalizations sometimes hover above trip-hop low tempos and unnerving atmospheres (“Mourning,” “Hallowed is the Earth”), sink through vintage horror movie darkwave synths (“Master,” “Apocalypse Fatigue”), weave through art metal distortions and relentless drone bass tones (“Mother”), or blast through unconventional industrial rhythms (“Prayers of Consequence,” “A Plague”).
Ultimately, ‘STREGA BEATA’ is a deep exploration of the cyclical consequences of unhealed grief for us and our world. “I have used this story to process the harm that power enables. There is a fine line between empowerment and using power to cope with the feeling of losing control. I thought a lot about the power of the natural world, particularly storms, while writing this. Storms are created by opposing pressures, causing destruction. It’s interesting how some ideologies will co-opt storms in their iconographies and language, but don’t acknowledge that storms are events that cannot be controlled or contained. The way out of pain is by reconciling what you cannot control and taking ownership of what you can. It is the only way to stop cycles of destruction. Sometimes you must complete another cycle before realizing you are ready to end it for yourself and others. I feel this record is the end of a destructive cycle for me, and why I returned to myself as this project.”
More about Lana Del Rabies:
Lana Del Rabies is the dark electronic, genre-bridging solo project of Phoenix-based musician, producer and multimedia artist Sam An. With origins as an experimental project that re-contextualized the more ominous aspects of modern pop music made by women (like that of Lana Del Rey), Lana Del Rabies’ music incorporates industrial, gothic noise and metal, with experimental, darkwave and ambient elements. Her work thematically embodies discordant spaces between the occult and the political, personal trauma and collective grief, and brutality and benediction.
Her first two records, In The End I Am A Beast (2016) and Shadow World (2018), were released by Los Angeles-based label Deathbomb Arc.
Track list:
- Prayers of Consequence
- A Plague
- Master
- Mother
- Grace The Teacher
- Mourning
- Hallowed is the Earth
- Reckoning
- Apocalypse Fatigue
- Forgive
Voicemod tools up with $14.5M to ride the generative AI (sonic)boom
The first thing we ask Voicemod’s CEO and co-founder, Jaime Bosch, when he picks up the phone to talk about a new funding round is not something we’re accustomed to asking — but our question may become the norm in the generative AI future that’s fast-flying at us: Is this your real voice?
Bosch’s startup has been fiddling with audio effects for almost a decade, playing in the field of digital signal processing (DSP) — where its early focus was on creating fun ‘sound emoji’ effects and reactions for gamers to spice up their voice chats. And gamers do remain its main user-base (for now). But the audio field is being charged by developments in AI — which Voicemod’s team is hoping will lead to whole new use-cases and many more users for its tools.
So where DSP technology was about applying effects to a person’s (real) voice, developments in artificial intelligence are enabling startups like Voicemod to offer tools to create entirely synthesized (unreal) voices. And even the ability for users to ‘wear’ these voices in real-time — so they can speak with a voice that isn’t theirs. Think of it as the audio equivalent of a Snapchat lens or TikTok’s viral teenage filter or Reface’s celebrity face-swaps.
AI voice can even enable voice-shifting into another person’s (real) voice. And not just for talking about the weather or shooting the shit. But for what’s known as sing-to-sing voice conversion. Meaning you could get to sing in someone else’s voice — supercharging your karaoke game, say, by singing Bohemian Rhapsody as literally the voice of Freddie Mercury. And even switching between Mercury, May and Taylor, for the full mock opera effect if you have enough trained AI models (and microphones) on hand. Mamma-mia!
Artificial intelligence makes all this possible — even if legal and ethical questions may give pause for thought about rushing to unleash real-time voice-shifting upon a world that still relies plenty upon fixed identities. (Banks pushing customers to record ‘a unique voiceprint’ to use as a password definitely need to sit up and start listening.)
Voicemod acquired another audio effects startup last year, called Voctro Labs, whose technology Bosch says it’s working to blend with its own to create an amped up hybrid platform. The combo has already allowed it to expand what it offers — launching a text-to-song feature last December which lets you turn your own lyrics into a vocal composition using generative AI. He tells us more is on the way — including the aforementioned sing-to-sing feature.
Voctro’s tech may be familiar as it was involved in the development of a voice clone of musician Holly Herndon which appeared in a viral Ted Talk last year — in which her AI voice could be heard duetting with another musician (Pher)’s real voice in real-time. Which, well, if you haven’t already seen it is quite the visual-audio spectacle, as well as being a mouthful to explain. It’s also a taster of what Voicemod has coming to a keyboard near you.
“We’re definitely going to launch more products and more ways for people to express themselves with the generative AI technology,” Bosch tells us. “Not all Voctro Labs’ technologies are related to music — but they have a lot of technology related to singing, from this text-to-song technology to sing-to-sing technology in real time. So we have a lot of new projects and new products upcoming.
“We are going to strengthen our speech-to-speech AI real-time technology, because we are basically merging our technology with their technology. We’re basically creating a hybrid technology that will be better than ours — or there’s a mix of both… [So their sing-to-sing technology will be] combined with our DSP technology — that we could use to do autotune. So we could potentially help artists with their voice and on the tone. And so this is, this is gonna be really, really interesting.”
As well as providing direct-to-consumer/creator audio tools, it offers its technologies via SDK and APIs for third parties to integrate into their own products, from games and apps to hardware. So it’s set up to distribute its tech across the gamer-creator ecosystem and have demand come find it.
Generative AI-powered disruption in audio of course mirrors (in a non-exact fairground ‘crazy mirror’ kind of a way) developments we’re seeing happen elsewhere: Visually, to graphics and illustration, as a result of deep learning and the advent of prompt-based image generation interfaces (such as DALL-E and Stable Diffusion). Also to the written word, through the large language models that underpin generative AI chatbots like ChatGPT that can produce song lyrics or a whole essay on demand. And, indeed, in the case of musical composition — where Google recently showed off a prompt-based generative AI song composer which can apparently produce arrangements that match the musical vibe you describe (although it said it’s not releasing that particular generative AI model — but surely someone else will).
It’s clear that AI is bending the rules of what it’s possible for a single person to create. And, well, as with freedom, the open concept, this is both thrilling and terrifying. Because, it’s what you do with it that counts.
The coming years are going to be all about finding out what people do with such powerful AI tools at their fingertips.
Voicemod is positioning itself to ride this wave by building a toolbox for creators to survive and thrive in a reality-bending future and across a range of use-cases — hence it’s talking in terms of sonic identity and voice avatars for the social metaverse (at the future-gaze-y end) but also just helping you sound your sparkling best on a work Zoom call. So a sort of audio make-up as it were. Apply as needed.
“Now suddenly everyone can become a creator,” predicts Bosch of the generative AI boon. “Everyone can come, basically, with no skill set. Or with no learnings on how to really craft those audios. They will be able to actually create those pieces of music. Songs. And this eventually evolves into — probably — even voices. So the ability to create voices.”
“This could potentially be something really viral for platforms like TikTok, or YouTube Shorts or Instagram… And this could eventually evolve into things like karaoke, for example. And be, I don’t know, part of game consoles, or things like that, for people to use this to entertain. And, if we go a step further — and it’s the technology getting better and better as we think it will be — this could potentially be a professional tool for people who want to create music. Or for people who want to create voices for movies or voices for games characters.
“We have a strong belief in user-generated content, and we are building tools for our users to start creating sounds and creating voices. And we will be putting technology in the hands of the users to create those [sounds]. And, eventually in the future, hopefully, they will go even to a professional level.”
So while — currently — in order for the startup to synthesize a whole voice it does still involve a team of sound engineers and designers, Bosch suggests generative AI will put that power in the hands of the individual — and it’ll happen soon; “in the near future”.
“I don’t know if we’ll be prompting — now we’re in this wave of everything is done through prompts — I’m not sure if that will be the way or it will be more tools that will have AI technology embedded and we have user experiences that will make things a lot easier,” he adds. “But definitely what I see from generative AI in the audio [space] but also in the management phase is that suddenly everyone can become a creator, which I think is really interesting.”
The birth of AI voice may not sound like amazing news for the employment prospects of sound engineers and designers (albeit, tech advances may simply create new requirements that just shift where their expertise is needed). But Bosch reckons that voice actors, at least, will still have a key role to play — emoting for AI. Since robot voices aren’t good at getting the pitch and intonation, or indeed emotion, right. It’s a voice clone without a soul, basically. (Or as Nick Cave might put it, AI voice lacks ‘its own blood, its own struggle, its own suffering’ — it lacks humanness.)
“I think that you will always need a human factor in your sample with these voices,” suggests Bosch. “You could have the best voice — of even a famous person — but what really comes is the impression. You still need a human to do the cadence on the words. You still need a human to do the rhythm, the tone. So [it’s not just that] I can speak normally and I will sound like a famous person — no, you don’t — you still need to act a little bit. So… I think human factor for expression is key.”
Might generative AI not be able to learn to emote as well, with the right human data-sets — and further dial up its mimicry so as to make us laugh or cry or love or hate on-demand too?
“Yeah. Well, we will see,” responds Bosch. “I’m not sure. I mean, as of today, for me AI is a tool to be used by humans. But yeah, we don’t know where this is going to evolve.”
Voicemod is gearing up for whatever phonic craziness lies ahead with a fresh tranche of funding. The 2014-founded startup has been revenue-generating for years, via pro versions of its tools — its main product, Voicemod for Desktop, has had more than 40 million downloads to date, while Bosch says it has 3.3 million monthly active users — but it’s just closed $14.5 million in expansion funding, following an $8 million Series A back in summer 2020. Leadwind, the growth fund of Madrid-based Kfund, led the round, with participation from Minifund (the fund of Eros Resmini, former CMO at Discord) and Bitkraft Ventures.
“We’re super excited by what generative AI can do to all creative industries and more specifically audio, especially when it comes to enhancing and augmenting the job that creative people already do,” Jamie Novoa, partner at Kfund, tells TechCrunch. “In the past few months there’s been an explosion in generative AI in general and more specifically in audio but we think this is a phenomenon that’s just starting.
“What many of the cool technologies being launched to market lack are concrete and scalable business models attached to them, and Voicemod differentiates itself from the pack by having built a product used by millions of people on a daily basis and with significant revenue traction. We’re super excited about what Jaime and the rest of the Voicemod team have in the pipeline and what’s to come.”
Voicemod says the extra funds will be used to enhance the development of its real-time AI voice identity capabilities — and dial up its proposition for Gen Z, gamers, content creators, and professionals of all skill levels wanting tools to help them express themselves vocally in digital spaces.
Per Bosch, part of the reason it’s taking more funding now relates to the acquisition of Voctro Labs. Beyond that, he says it’s about making the most of the opportunities sparking off the Cambrian explosion in generative AI tools.
“We are in the middle of a tremendous revolution in AI,” he says. “We want to be well funded in order to be able to develop technology but also to be able to deliver technology to users. So I think one of our competitive advantages is that we already have the market and the traction and we basically are able to put this in the hands of the users. And I want to make sure to have enough runway, also due to market conditions, to be able to put all of this in place. So it will be mainly focused… on building the next generation AI technology and putting it in the hands of the users and also building these creation tools for the users to create content.”
The first new tool will be landing next month — with a launch of Voicemod’s desktop product on macOS (currently it’s PC only). The goal is to evolve into a multi-platform product spanning all devices. “We’re also working on a creation tool mobile app that hopefully will see the light towards the beginning of next quarter. And, and yeah, some more stuff to come, hopefully,” Bosch adds.
He also tells us the startup is working on a watermarking technology which it hopes to launch in Q2 this year — to give platforms a way to be able to spot AI-generated voices in the wild.
Such a feature is likely to be a vital tool to counter all the possible negative use-cases (scams, fraud, manipulation, abuse, bullying, trolling etc etc) one could imagine humans coming up with for voice-shifting tools that let you sound exactly like someone you’re not.
“It’s an algorithm to watermark the audio,” explains Bosch. “Moderation is complicated because it really changes depending on the space… on which are the platforms where the audio is used — so we believe that the channel is the one that should own that moderation and what we are doing is we will be providing this watermarking system in order for them to be able to know if the audio is created via synthetic voice or is created by a real voice.”
“Every single new technology can be used for the good or for the bad,” he adds. “So we are of course putting some technology, some tools in place to be able to have more control around a misuse of this technology.”
On questions of licensing for training data, IP issues here are currently a grey area — as the law hasn’t caught up with developments in AI (let alone generative AI). That means startups operating in the space have to consider whether to make the most of total legal freedom to do whatever they want (and hope expensive consequences don’t come clanging down on them in short order), or tread more carefully and thoughtfully. (Other startups in the space include the likes of Voice AI, Koe and ElevenLabs.)
Bosch claims Voicemod is taking the latter approach — using (paid) voice actors to build up data-sets to train and hone its AI models. If it wants to make use of some original content he says the team will go to the IP provider and negotiate — and figure out what kind of licensing terms they’d be up for. (The generative AI boom is also a crazy-thrilling time to be an IP lawyer, clearly.)
“We are basically pioneering here,” he adds. “So a lot of things are without laws yet so we were trying to stick to our values, basically, and try to do the right thing. That’s our approach on the data [side]. But yeah, you’re completely right — there’s no ‘legal attachment’ to your voice, as of today… We own our fingerprint. You don’t own, like, whatever the fingerprint of your voice [is]. As of today.
“It sounds a little bit like science fiction but maybe, in the future, we will ‘own’ something related to our voice.”
For the record, Bosch was talking to me with his actual voice. The company’s real-time voice-shifting technology doesn’t yet work over mobile. But he says that’s coming too. So buckle up: The synthesized future is gonna be a screaming wild ride.
12 AI tools you need to know about and use
The year has kicked off with cries that Terminator’s Skynet has risen and become self-aware. That’s how many are feeling today with the sheer number of artificial intelligence (AI) tools that have grown in popularity overnight this month.
This is no doubt the year of AI tools. The tools have such a diverse range of uses that people have already started seeing how their jobs will either get a major boost in how they operate, or be made redundant within the next few years, and it’s time to start learning new skills.
I’ve been testing out a number of the new AI tools and it’s amazing just how much help they can be and the power that regular folks like us now have at our fingertips. The key now is going to be for people to learn how to use these tools because they are going to become one of the greatest assets for us this year.
I’ve had friends who have said they are using the tools to create all of their work e-mails, I’ve seen folks dump entire podcast transcripts or news articles into the AI tools and ask them to summarise it all into its main points for easy consumption. What is impressive about it all is that it does almost anything you ask it to do within seconds.
When you go over the output from the tools, you will realise you may need to make some minor tweaks, but everything you receive is pretty much 95 per cent ready to go. The scary thing is that AI tools are continuously learning, and they are going to be light years better in the coming months than they are today. This is why you need to get on board and start to learn them as best as you can.
I’ve compiled a list of the top 12 AI tools that you need to know and should learn how to use as of Feb 2023.
- Chat GPT
- Chat Sonic
- Synthesia
- Lexica
- Mid-Journey
- Well Said Labs
- Runway
- Leia Pix
- Scenario GG
- Rytr
- Jasper AI
- Adept AI
Chat GPT: Chat GPT is like a super smart and funny robot friend who you can talk to and ask anything! It’s like having a genie that can answer all your questions, tell jokes, and even write stories for you. Just be careful what you wish for because Chat GPT is so smart, it might surprise you with its answers! (I asked Chat GPT to give a brief description of itself to someone who has never heard of it, but make it funny; that’s what it came back with.) You can ask it questions, write code, create copy, write e-mails and just so much more. Take the time to learn all of the prompts and uses for Chat GPT so that it can aid you in your jobs and tasks. Chat GPT’s knowledge base is accurate only up to September 2021, so keep that in mind.
Chat Sonic: This tool, on the other hand, is directly connected to Google’s data source, giving it up-to-date knowledge, and it operates very similarly to Chat GPT. The free version gives you up to 25 commands per day.
Synthesia: This tool creates AI videos in up to 120 languages, all from text-based inputs. Rather than having to film people, you can create your scripts in Chat Sonic or Chat GPT, feed the content to Synthesia, and it will create a 100 per cent AI-based video with realistic talking AI avatars, green-screen backgrounds and much more.
Lexica: Allows you to find images and gives you the prompts needed to create similarly styled images in Mid-Journey, Dall-E2 and Stable Diffusion.
Mid-Journey: Like Dall-E2 and Stable Diffusion, this AI tool allows you to create graphics and art via text inputs. Tell it exactly what you need out of an image and it will create it for you within seconds. Pair this up with tools like Canva and you have an amazing combination for creating social media graphics.
Well Said Labs: Creates 100 per cent human-sounding voiceovers for your content via text inputs.
Runway: An AI video editing tool with features such as automatic background and object removal.
Leia Pix: Turns all 2D graphics into 3D images with depth and allows pictures to breathe.
Scenario GG: Upload your art and this tool creates dozens of variations of the original.
Rytr & Jasper: AI writing tools to help you create and refine content.
Adept AI: A general intelligence, task-based tool. I’ve seen people ask Adept AI to create Excel spreadsheets, formulas included, all via text inputs. You can ask it to search the internet and find you a place to rent within your specified budget and it does all the work for you. Check this one out!
There you have it, 12 AI tools that you need to check out immediately and start learning how to use to save time and money. These tools aren’t going anywhere and only those who remain oblivious to them will face the reality of becoming redundant.
Keron Rose is a digital strategist that works with Caribbean entrepreneurs. Learn more at KeronRose.com or listen to the Digipreneur FM podcast on Apple Podcast/Spotify/Google Podcast.
Can India’s ChatSonic Take On ChatGPT’s Dominance?
Writesonic’s AI chat interface, ChatSonic, is an alternative to Open AI’s popular chatbot, but with a twist. The primary difference between ChatGPT and ChatSonic is that while the former works with a data set that dates back to 2021, the latter directly integrates Google Search to access the latest data. ChatSonic has not been designed to replace humans but to augment their capabilities, the founder said.
US-based deeptech decacorn Open AI’s artificial intelligence (AI) chat interface ChatGPT is making headlines around the world for its exceptional ability to interact and respond like a human, among other things.
However, did you know that India-based Writesonic already had a similar chat interface, or chatbot, even before Open AI launched ChatGPT in November 2022?
The startup’s AI chat interface called ChatSonic is an alternative to the popular Open AI’s chatbot, but with a twist. Unlike ChatGPT, ChatSonic integrates Google Search and text-to-speech within its operation, making it efficient enough to produce the most up-to-the-minute answers to your questions.
Founded in 2021, Writesonic is a SaaS startup that primarily offers AI-powered content generation services. Writesonic’s products include ChatSonic and Photosonic, which are AI-powered tools to generate written and image content, respectively.
How Does ChatSonic Work?
Showing his cards, Writesonic’s founder Samanyou Garg told Inc42 that ChatSonic and ChatGPT are similar in many ways, as they both work on Open AI’s Generative Pre-trained Transformer 3 (GPT-3) language model.
“A lot of underlying tech that we use is powered by Open AI. We have been partners of Open AI for the past three years,” Garg said.
However, the primary difference between ChatGPT and ChatSonic is that while the former works with a data set that dates back to 2021, the latter directly integrates Google Search to access the latest data.
To an inexperienced eye, ChatSonic and ChatGPT could look very similar, with the only obvious difference being that the former was launched in 2021. However, there are many differences between the two AI-driven chatbots.
For starters, ChatSonic generates up-to-date responses using Google Search. “We take the top five or 10 results from Google Search and use the information to create some new content,” explained Garg.
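Mechanically, that is a simple retrieve-then-generate loop: collect the top snippets, stitch them into a prompt, and hand the prompt to the language model. A toy sketch of the idea follows; the function name and the sample data are invented for illustration, since ChatSonic’s actual pipeline is not public:

```python
# Toy retrieval-augmentation: stitch top search snippets into one prompt
# that a language model would then rewrite into fresh, up-to-date content.
def build_augmented_prompt(question: str, search_results: list, top_n: int = 5) -> str:
    snippets = [r["snippet"] for r in search_results[:top_n]]
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        f"Using only the facts below, answer the question.\n"
        f"Facts:\n{context}\n"
        f"Question: {question}"
    )

# Hypothetical search results standing in for live Google data.
results = [
    {"title": "A", "snippet": "The event took place in March 2023."},
    {"title": "B", "snippet": "Attendance exceeded 10,000 people."},
]
prompt = build_augmented_prompt("When was the event and how many attended?", results)
print(prompt)
```

Grounding the model in fetched snippets like this is what lets the chatbot answer with information newer than its training data.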
Apart from the integrated Google Search, ChatSonic offers other features such as text-to-speech. The chatbot also offers certain personality types for its responses. Essentially, ChatSonic’s AI will answer the user’s questions using the word choice and semantics associated with the chosen personality.
Garg told Inc42 that the AI chatbot can even read Excel sheets to allow users to create charts and graphs from the data. The startup has launched applications, which allow Android and iOS users to use ChatSonic on their smartphones, without a browser.
Further, Writesonic will soon be launching a Chrome extension for its chatbot that would allow users to access the interface without opening its webpage.
“We are adding ChatSonic right next to Google. This will enable ChatSonic to show you a one-two paragraph summary so that you don’t have to dig through two-three pages of Google. It would also give you the source of information,” Garg said.
AI-Generated Content: Is This The Future?
Writesonic has been targeting the AI-based content market since its inception. “What we want to do is streamline the content creation process for businesses, for individuals, for freelancers and startups,” Garg said. He added that there are no limits to the use cases, which could be developed using ChatSonic.
According to an Acumen Research and Consulting report, the generative AI market would reach $110.8 Bn in size by 2030, growing at a CAGR of 34.3%.
So far, the startup has been attracting the attention of businesses looking to streamline their content pipelines. The founder noted that over the past three-four weeks, the company has received a lot of queries from big multinational companies to use natural language processing models to create AI-generated content.
Among the many use cases that Garg told us about, the most interesting was how ChatSonic was being implemented as either a search engine for internal documents or as a customer support chatbot, competing with WhatsApp and Zoho for the latter.
Amid all the buzz about AI-generated content, there have been mentions of how AI would automate content creation, essentially putting content writers out of their jobs. The conversation has only picked up more support since the rise of ChatGPT.
However, Garg said ChatSonic would not replace any human.
“The idea here is not to replace humans but to augment their capabilities,” Garg said. He explained that Writesonic’s products could be used to reduce the time it takes to generate content.
“You’re saving 80% of the initial time with the first draft when you’re brainstorming or coming up with ideas,” he added.
Writesonic is a SaaS platform, so its earnings come from the various subscription models it offers to its users. Without revealing any key metrics, Garg said the startup has been profitable from the start and has an infinite runway.
“We don’t need the funding right now. We may take it [the funding] in a couple of months for enterprise introductions or something. But we don’t need to go for a funding round right now,” Garg said.
Interestingly, despite similarities, ChatSonic does not position itself to be a competitor of ChatGPT. However, Garg and his team seem to be building a replacement for ChatGPT. Their take on GPT-3 is very different from Open AI, and by integrating Google Search, ChatSonic might just edge out its more famous counterpart.
Sonic Frontiers director talks about the future of 3D Sonic
The latest 3D Sonic game, Sonic Frontiers, has been nothing short of a sales success for Sonic Team and has given them a stable template for future 3D Sonic adventures. The director behind Sonic Frontiers, Morio Kishimoto, has taken to Twitter to chat with fans and answer questions. Mr. Kishimoto says that work on the next 3D Sonic game is already underway and, thankfully, Sonic Team is listening to fan feedback on what worked in Sonic Frontiers and what didn’t. He explained that the general story in Sonic Frontiers wasn’t handled as well as it could have been, and this is something they will need to pay more attention to in future games. Here are a number of key points rounded up by Sonic fan-site Tails Channel:
The next big step for this new “generation” of Sonic games is to add more playable characters.
The story in Sonic Frontiers wasn’t handled in the best way it could have been, according to Kishimoto. This led him to say that the next installments should pay even more attention to the script and story, and go deeper into the characters.
The constant repetition of previous Sonic levels in games, like Green Hill Zone and Chemical Plant, is going to be avoided more in the future, after fan feedback.
Work on the next main Sonic game has already begun, Kishimoto and his team are exploring what gameplay features would be ideal given the fan feedback.
There are plans to expand the combat even further and make the battles deeper and more immersive.
The “Homing Dash” technique from Sonic Frontiers was discovered by accident by Kishimoto and his team during development; they decided to leave it in as a gameplay bonus for players.
The bigger focus for the Sonic games right now is to provide a good single-player experience, but multiplayer games aren’t being outright dismissed.
Kishimoto won’t be the sole director for every upcoming Sonic project; there will be other directors managing different projects.
The Open Zone gameplay style will be worked on and is being heavily considered for the next main game.
Thanks to GreatSong1 and sonicgalaxy27 for sending in the news tip!
Opinion: ChatGPT, create an ad for Coca-Cola in India
No, ChatGPT did not create a Cannes Lion-winning ad, nor did it create something truly refreshing.
Before you say ‘Here we go, agencies are getting nervous’. No, this article is not about dissing ChatGPT or putting down AI.
Can’t we all just co-exist? 🙂
AI is revolutionary, truly, like Tesla, the scientist, not the brand, and in the years to come it will make things easy and very accessible to all and sundry. But to say that it will put ad agencies, or creative people, or copywriters, or content writers, or marketers, or designers, or the humble office boys, or the admins, or the receptionists, or the auto-wallas, out of their jobs, is a little dramatic.
Ok, not little, but too dramatic.
Ok, now look at me, look at the article, now back at me (line inspired by the award-winning, much-loved, Old Spice campaign), and now you might have an idea where this is headed – being creative and solving problems in an AI driven world. An article with its title buried within the article! See what we did there?
Simply put, ChatGPT, Chat Sonic, Bard, or any other AI tool is not going to put anything or anybody out of business, not anytime soon, not ever.
But it will prove to be a vital tool in any marketer’s or ad agency’s arsenal. Yes, tool – not the boss. We are nowhere close to Skynet (think Terminator) becoming a reality.
Creativity is evolutionary. It is our collective consciousness, our learnings and our imaginative thinking, and it has allowed us to invent and innovate. It is the power to imagine, to connect seemingly unconnected things and create something new.
I actually asked ChatGPT to create an ad for Coca-Cola in India and this is what I got (verbatim copy-paste, honest):
Title: “Refresh Your World with Coca-Cola”
Image: A picture of an ice-cold Coca-Cola being poured into a glass with a slice of lemon, against a backdrop of an iconic Indian landmark such as the Taj Mahal or the Gateway of India.
Caption: “Experience the crisp, refreshing taste of Coca-Cola, now available in India. Savor every sip and refresh your world with the world’s most famous soft drink.”
And now let me tell you a short story about an actual Coca-Cola ad created some years back, one that most of us still remember. And even if you don’t remember it, it’s still a very interesting story.
This famous adman noticed that people in his village kept mangoes cool in the summer by placing them in a bucket and immersing them in the village well. And voila, thus was born the iconic ‘Thanda Matlab Coca Cola’ campaign.
Didn’t I say this was an interesting story?
Open challenge: We challenge any AI to come up with the insanely hilarious and innovative campaign done by Old Spice some years back. It was not just any agency’s dream but also any brand marketer’s.
Then again, like I said in the beginning, this is not to put down any AI.
AI is revolutionary (and we’re back to the beginning of the article), and AI will greatly help both brand and agency. It will help as a stepping stone to greater thought. It will help give us insights. It will help guide us when we’re stuck. Yes it will also help in reducing turnaround time for things that don’t need too much thought, like maybe a few posts here and there, creating some images, writing a few articles, coding, and..
Ok let me not go into what ChatGPT can do, since you must have already read tons of articles around that by now.
But I’ve made my point. And I’m sure you get the point too.
And BTW, I even told ChatGPT to suggest a thought leadership article for a digital ad agency. This is what it gave me: The Future of Digital Experiences: Trends and Predictions for 2023 and beyond.
(Article authored by Gaurang Menon, CCO, BC Web Wise.)