When Did Kanye Go Auto Tune


In January of 2010, Kesha Sebert, known as ‘Ke$ha’, debuted at number one on Billboard with her album, Animal. Her style is electro pop-y dance music: she alternates between rapping and singing, and the choruses of her songs are typically melodic party hooks that bore deep into your brain: “Your love, your love, your love, is my drug!” And at times, her voice is so heavily processed that it sounds like a cross between a girl and a synthesizer. Much of her sound is due to the pitch correction software, Auto-Tune.

Sebert, whose label did not respond to a request for an interview, has built a persona as a badass wastoid, who told Rolling Stone that all male visitors to her tour bus had to submit to being photographed with their pants down. Even the bus drivers.

Yet this past November on the Today Show, the 25-year-old Sebert looked vulnerable, standing awkwardly in her skimpy purple, gold, and green unitard. She was there to promote her new album, Warrior, which was supposed to reveal the authentic her.

“Was it really important to let your voice be heard?” asked the host, Savannah Guthrie.

“Absolutely,” Sebert said, gripping the mic nervously in her fingerless black gloves.

“People think they’ve heard the Auto-Tune, they’ve heard the dance hits, but you really have a great voice, too,” said Guthrie, helpfully.

“No, I got, like, bummed out when I heard that,” said Sebert, sadly. “Because I really can sing. It’s one of the few things I can do.”

Warrior starts with a shredding electrical static noise, then comes her voice, sounding like what the Guardian called “a robo squawk devoid of all emotion.”

“That’s pitch correction software for sure,” wrote Drew Waters, Head of Studio Operations at Capitol Records, in an email. “She may be able to sing, but she or the producer chose to put her voice through Auto-Tune or a similar plug-in as an aesthetic choice.”

So much for showing the world the authentic Ke$ha.

Since rising to fame as the weird techno-warble effect in the chorus of Cher’s 1998 song, “Believe,” Auto-Tune has become bitchy shorthand for saying somebody can’t sing. But the diss isn’t fair, because everybody’s using it.

For every T-Pain — the R&B artist who uses Auto-Tune as an over-the-top aesthetic choice — there are 100 artists who are Auto-Tuned in subtler ways. Fix a little backing harmony here, bump a flat note up to diva-worthy heights there: smooth everything over so that it’s perfect. You can even use Auto-Tune live, so an artist can sing totally out of tune in concert and be corrected before their flaws ever reach the ears of an audience. (On season 7 of the UK X-Factor, it was used so excessively on contestants’ auditions that viewers got wise, and protested.)

Indeed, finding out that all the singers we listen to have been Auto-Tuned does feel like someone’s messing with us. As humans, we crave connection, not perfection. But we’re not the ones pulling the levers. What happens when an entire industry decides it’s safer to bet on the robot? Will we start to hate the sound of our own voices?

They’re all zombies!


Cher’s late ‘90s comeback and makeover as a gay icon can be attributed entirely to Auto-Tune. In 1998, she released the single “Believe,” which featured a strange, robotic vocal effect on the chorus that felt fresh. It was created with Auto-Tune, though the song’s producers claimed for years that it was a Digitech Talker vocoder pedal effect.

The technology, which debuted in 1997 as a plug-in for Pro Tools (the industry standard recording software), works like this: you select the key the song is in, and then Auto-Tune analyzes the singer’s vocal line, moving “wrong” notes up or down to what it guesses is the intended pitch. You can control the time it takes for the program to move the pitch: slower is more natural, faster makes the jump sudden and inhuman sounding. Cher’s producers chose the fastest possible setting, the so-called “zero” setting, for maximum pop.
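
To make that concrete, here is a minimal Python sketch of the note-snapping logic described above. It is not Antares’ actual algorithm, just the general idea; the key, the frame-by-frame pitch input, and the smoothing scheme are illustrative assumptions.

```python
import numpy as np

A4 = 440.0
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the chosen key's scale

def hz_to_midi(hz):
    return 69 + 12 * np.log2(hz / A4)

def midi_to_hz(midi):
    return A4 * 2 ** ((midi - 69) / 12)

def nearest_scale_note(midi):
    # Consider scale tones in the surrounding octaves and pick the closest.
    octave = int(midi // 12)
    candidates = [12 * o + s for o in (octave - 1, octave, octave + 1) for s in C_MAJOR]
    return min(candidates, key=lambda c: abs(c - midi))

def autotune(pitches_hz, retune_speed=1.0):
    """Pull each detected pitch toward the nearest note of the scale.
    retune_speed=1.0 snaps instantly (the "zero setting" Cher effect);
    smaller values glide toward the target and sound more natural."""
    corrected = hz_to_midi(pitches_hz[0])
    out = []
    for hz in pitches_hz:
        target = nearest_scale_note(hz_to_midi(hz))
        corrected += retune_speed * (target - corrected)
        out.append(midi_to_hz(corrected))
    return out

# A slightly flat A (435 Hz) gets pulled up to a perfect 440 Hz.
print(autotune([435.0, 436.0, 434.0]))
```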

“Believe” was a huge hit, but among music nerds, it was polarizing. Indie rock producer Steve Albini, who’s recorded bands like the Pixies and Nirvana, has said he thought the song was mind-numbingly awful, and was aghast to see people he respected seduced by Auto-Tune.

“One by one, I could see that my friends had gone zombie. This horrible piece of music with this ugly soon-to-be cliché was now being discussed as something that was awesome. It made my heart fall,” he told the Onion AV Club in November of 2012.

The Auto-Tune effect spread like a slow burn through the industry, especially within the R&B and dance music communities. T-Pain began Cher-style Auto-Tuning all his vocals, and a decade later, he’s still doing it.

“It’s makin’ me money, so I ain’t about to stop!” T-Pain told DJ Skee in 2008.

Kanye West did an album with it. Lady Gaga uses it. Madonna, too. Maroon 5. Even the artistically high-minded Bon Iver has dabbled. A YouTube series where TV news clips were Auto-Tuned, “Auto-Tune the News”, went viral. The glitchy Auto-Tune mode seems destined to be remembered as the “sound” of the 2000s, the way the gated snare (that dense, big, reverb-y drum sound on, say, Phil Collins songs) is now remembered as the sound of the ‘80s.

Auto-Tune certainly isn’t the only robot voice effect to have wormed its way into pop music. In the ‘70s and early ‘80s, voice synthesizer effects units became popular with a lot of bands. Most famous is the Vocoder, originally invented in the 1930s and later used to send encoded Allied messages during WWII. Proto-techno groups like New Order and Kraftwerk (e.g., “Computer World”) embraced it. So did early American funk and hip hop groups like the Jonzun Crew.

‘70s rockers gravitated towards another effect, the talk box. Peter Frampton (listen for it on “Do You Feel Like We Do”) and Joe Walsh (used it on “Rocky Mountain Way”) liked its similar-to-a-vocoder sound. The talk box was easier to rig up than the Vocoder — you operate it via a rubber mouth tube when applying it to vocals. But it produces massive amounts of slobber. In Dave Tompkins’ book, How to Wreck a Nice Beach, about the history of synthesized speech machines in the music industry, he writes that Frampton’s roadies sanitized his talk box in Remy Martin Cognac between gigs.

Showy effects usually provoke a backlash. And in the case of the Auto-Tune warble, Jay-Z struck back with the 2009 single “D.O.A.,” or “Death of Auto-Tune.”

I know we facing a recession
But the music y'all making going make it the great depression
All y'all lack aggression
Put your skirt back down, grow a set man
Nigga this shit violent
This is death of Auto-Tune, moment of silence

That same year, the band Death Cab for Cutie showed up at the Grammys wearing blue ribbons to raise awareness, they told MTV, about “rampant Auto-Tune abuse.”

The protests came too late, though. The lid to Pandora’s box had been lifted. Music producers everywhere were installing the software.


Everybody uses it

“I’ll be in a studio and hear a singer down the hall and she’s clearly out of tune, and she’ll do one take,” says Drew Waters of Capitol Records. That’s all she needs. Because they can fix it later, in Auto-Tune.

There is much speculation online about who does — or doesn’t — use Auto-Tune. Taylor Swift is a key target, as her terribly off-key duet with Stevie Nicks at the 2010 Grammys suggests she’s tone deaf. (Label reps said at the time something was wrong with her earpiece.) But such speculation is naïve, say the producers I talked to. “Everybody uses it,” says Filip Nikolic, singer in the LA-based band, Poolside, and a freelance music producer and studio engineer. “It saves a ton of time.”

On one end of the spectrum are people who dial up Auto-Tune to the max, a la Cher / T-Pain. On the other end are people who use it occasionally and sparingly. You can use Auto-Tune not only to pitch-correct vocals but other instruments too, and light users will tweak a note here and there if a guitar is, say, rubbing up against a vocal in a weird way.

“I’ll massage a note every once in a while, and often I won’t even tell the artist,” says Eric Drew Feldman, a San Francisco-based musician and producer who’s worked with The Polyphonic Spree and Frank Black.

But between those two extremes, you have the synthetic middle, where Auto-Tune is used to correct nearly every note, as one integral brick in a thick wall of digitally processed sound. From Justin Bieber to One Direction, from The Weeknd to Chris Brown, most pop music produced today has a slick, synth-y tone that’s partly a result of pitch correction.

However, good luck getting anybody to cop to it. Big producers like Max Martin and Dr. Luke, responsible for mega hits from artists like Ke$ha, Pink, and Kelly Clarkson, either turned me down or didn’t respond to interview requests. And you can’t really blame them.

“Do you want to talk about that effect you probably use that people equate with your client being talentless?”

Um, no thanks.

In 2009, an online petition went around protesting the overuse of Auto-Tune on the show Glee. Those producers turned down an interview, too.

The artists and producers who would talk were conflicted. One indie band, The Stepkids, had long eschewed Auto-Tune and most other modern recording technologies to make what they call “experimental soul music.” But the band recently did an about-face, and Auto-Tuned their vocal harmonies on their forthcoming single, “Fading Star.”

Were they using Auto-Tune ironically or seriously? Co-frontman Jeff Gitelman said,

“Both.”

“For a long time we fought it, and we still are to a certain degree,” said Gitelman. “But attention spans are a certain way, and that’s how it is…we just wanted it to have a clean, modern sound.”

Hanging above the toilet in San Francisco’s Different Fur recording studios — where artists like the Alabama Shakes and Bobby Brown have recorded — is a clipping from Tape Op magazine that reads: “Don’t admit to Auto-Tune use or editing of drums, unless asked directly. Then admit to half as much as you really did.”

Different Fur’s producer / engineer / owner, Patrick Brown, who hung the clipping there, has recorded acts like the Morning Benders, and says many indie rock bands “come in, and first thing they say is, ‘We don’t tune anything.’”

Brown is up for ditching Auto-Tune if the client really wants to, but he says most of the time, they don’t really want to. “Let’s face it, most bands are not genius.” He’ll feel them out by saying, with a wink-wink-nod-nod: “Man, that note’s really out of tune, but that was a great take.” And a lot of times they’ll tell him, go ahead, Auto-Tune it.

Marc Griffin is in the RCA-signed band 2AM Club, which has both an emcee and a singer (Griffin’s the singer.) He first got Auto-Tuned in 2008, when he recorded a demo with producer Jerry Harrison, the former keyboardist and guitarist for the Talking Heads.

“I sang the lead, then we were in the control room with the engineer, and he put ‘tune on it. Just a little. And I had perfect pitch vocals. It sounded amazing. Then we started stacking vocals on top of it, and that sounded amazing,” says Griffin.

Now, Griffin sometimes records with Auto-Tune on in real time, rather than having it applied to his vocals in post-production, a trend producers say is not unusual. This means that the artist hears the tuned version of his or her voice coming out of the monitors while singing.

“Every time you sing a note that’s not perfect, you can hear the frequencies battle with each other,” Griffin says, which sounds kind of awful, but he insists it “helps you hear what it will really sound like.”

Singer / songwriter Neko Case kvetched about these developments in an interview with online music magazine, Pitchfork. “I'm not a perfect note hitter either but I'm not going to cover it up with auto tune. Everybody uses it, too. I once asked a studio guy in Toronto, ‘How many people don't use Auto-Tune?’ and he said, ‘You and Nelly Furtado are the only two people who've never used it in here.’ Even though I'm not into Nelly Furtado, it kind of made me respect her. It's cool that she has some integrity.”

That was 2006. This past September, Nelly Furtado released the album, The Spirit Indestructible. Its lead single is doused in massive levels of Auto-Tune.

Dr. Evil

Somebody once wrote on an online message board that the guy who created Auto-Tune must “hate music.” That could not be further from the truth. Its creator, Dr. Andy Hildebrand, AKA Dr. Andy, is a classically trained flautist who spent most of his youth playing professionally in orchestras. Despite the fact that the 66-year-old only recently lopped off a long, gray ponytail, he’s no hippie. He never listened to the rock music of his generation.

“I was too busy practicing,” he says. “It warped me.”

The only post-Debussy artist he’s ever gotten into is Patsy Cline.

Hildebrand’s company — Antares — nestled in an anonymous-looking office park in the mountains between Silicon Valley and the Pacific Coast, has only ten employees. Hildebrand invents all the products (Antares recently came out with Auto-Tune for Guitar). His wife is the CFO.

Hildebrand started his career as a geophysicist, programming digital signal processing software which helped oil companies find drilling spots. After going back to school for music composition at age 40, he discovered he could use those same algorithms for the seamless looping of digital music samples, and later for pitch correction. Auto-Tune, and Antares, were born.

Auto-Tune isn’t the only pitch correction software, of course. Its closest competitor, Melodyne, is reputed to be more “natural” sounding. But Auto-Tune is, in the words of one producer, “the go-to if you just want to set-it-and-forget-it.”

In interviews, Hildebrand handles the question of “is Auto-Tune evil?” with characteristic dry wit. His stock answer is, “My wife wears makeup, does that make her evil?” But on the day I asked him, he answered, “I just make the car. I don’t drive it down the wrong side of the road.”

The T-Pains and Chers of the world are the crazy drivers, in Hildebrand’s analogy. The artists that tune with subtlety are like his wife, tasteful people looking to put their best foot forward.

Another way you could answer the question: recorded music is, by definition, artificial. The band is not singing live in your living room. Microphones project sound. Mixing, overdubbing, and multi-tracking allow instruments and voices to be recorded, edited, and manipulated separately. There are multitudes of effects, like compression, which brings down loud sounds and amplifies quiet ones, so you can hear an artist taking a breath in between words. Reverb and delay create echo effects, which can make vocals sound fuller and rounder.
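
As an aside, the behavior of a compressor is simple enough to sketch. The following toy Python function is a bare-bones illustration of the idea described above, not any real studio unit: actual compressors add attack and release smoothing, and the threshold, ratio, and makeup gain here are arbitrary.

```python
import numpy as np

def compress(signal, threshold_db=-20.0, ratio=4.0, makeup_db=6.0):
    # Instantaneous downward compression: samples louder than the
    # threshold are turned down by `ratio`, then makeup gain lifts
    # everything, so quiet details (like a breath) come forward.
    level_db = 20 * np.log10(np.abs(signal) + 1e-10)
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = makeup_db - over_db * (1.0 - 1.0 / ratio)
    return signal * 10 ** (gain_db / 20.0)

# A loud hit next to a quiet breath: compression narrows the gap.
print(compress(np.array([0.9, 0.02, -0.5, 0.01])))
```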

When recording went from tape to digital, there were even more opportunities for effects and manipulation, and Auto-Tune is just one of many of the new tools available. Nonetheless, there are some who feel it’s a different thing. At best, unnecessary. At worst, pernicious.

“The thing is, reverb and delay always existed in the real world, by placing the artist in unique environments, so [those effects are] just mimicking reality,” says Larry Crane, the editor of music recording magazine, Tape Op, and a producer who’s recorded Elliott Smith and The Decemberists. If you sang in a cave, or some other really echo-y chamber, you’d sound like early Elvis, too. “There is nothing in the natural world that Auto-Tune is mimicking, therefore any use of it should be carefully considered.”

“I’d rather just turn the reverb up on the Fender Twin in the troubling place,” says Arizona indie rock pioneer Howe Gelb, of the band Giant Sand. He describes Auto-Tune and other correction plug-ins as “foul” in a way he can’t quite put his finger on. “There’s something embedded in the track that tends to push my ear away.”

Lee Alexander, one-time boyfriend of Norah Jones and bass player and producer for her country side project, The Little Willies, used no Auto-Tune on their two records, and says he doesn’t even own the program.

“Stuff is out of tune everywhere…that to me is the beauty of music,” he wrote in an email.

In 2000, Matt Kadane of the band The New Year and his brother, Bubba, covered Cher’s “Believe”, complete with Auto-Tune. They did it in their former Texas slo-core band, Bedhead. Kadane told me he hated the original “Believe,” and had to be talked into covering it, but surprisingly found that putting Auto-Tune on his vocals “added emotional weight.” He hasn’t, however, used Auto-Tune since.

“It’s one thing to make a statement with hollow, disaffected vocals, but it’s another if this is the way we’re communicating with each other,” he says.

For some people, I said, it seems that Auto-Tune is a lot like dudes and fake boobs. Some dudes see fake boobs, they know they’re fake, but they get an erection anyway. They can’t help themselves. Kadane agreed that it “can serve that function.”

“But at some point you’d say ‘that’s fucked up that I have an erection from fake boobs!’” he says. “And in the midst of experiencing that, I think ideally you have a moment that reminds you that authenticity is still possible. And thank God not everything in the world is Auto-Tuned.”

The Beatles actually suck

Does your brain get rewired to expect perfect pitch?

The concept of pitch needing to be “correct” is a somewhat recent construct. Cue up the Rolling Stones’ Exile on Main St., and listen to what Mick Jagger does on “Sweet Virginia.” There are a lot of flat and sharp notes, because, well, that’s characteristic of blues singing, which is at the roots of rock and roll.

“When a (blues) singer is ‘flat’ it’s not because he’s doing it because he doesn’t know any better. It’s for inflection!” says Victor Coelho, Professor of Music at Boston University.

Blues singers have traditionally played with pitch to express feelings like longing or yearning, to punch up a nastier lyric, or make it feel dirty, he says. “The music is not just about hitting the pitch.”

Of course that style of vocal wouldn’t fly in Auto-Tune. It would get corrected. Neil Young, Bob Dylan, many of the classic artists whose voices are less than pitch perfect – they probably would be pitch corrected if they started out today.

John Parish, the UK-based producer who’s worked with PJ Harvey and Sparklehorse, says that though he uses Auto-Tune on rare occasions, he is no fan. Many of the singers he works with, Harvey in particular, have eccentric vocal styles; he describes them as “character singers.” Using pitch correction software on them would be like trying to get Jackson Pollock to stay inside the lines.

“I can listen to something that can be really quite out of tune, and enjoy it,” says Parish. But is he a dying breed?

“That’s the kind of music that takes five listens to get really into,” says Nikolic, of Poolside. “That’s not really an option if you want to make it in pop music today. You find a really catchy hook and a production that is in no way challenging, and you just gear it up!”

If you’re of the generation raised on technology-enabled perfect pitch, does your brain get rewired to expect it? So-called “supertasters” are people who are genetically more sensitive to bitter flavors than the rest of us, and therefore can’t appreciate delicious bitter things like IPAs and arugula. Is the Auto-Tune generation likewise more sensitive to off key-ness, and thus less able to appreciate it? Some troubling signs point to ‘yes.’

“I was listening to some young people in a studio a few years ago, and they were like, ‘I don’t think The Beatles were so good,’” says producer Eric Drew Feldman. They were discussing the song “Paperback Writer.” “They’re going, ‘They were so sloppy! The harmonies are so flat!’”

Just make me sound good

John Lennon famously hated his singing voice. He thought it sounded too thin, and was constantly futzing with vocal effects, like the overdriven sound on “I Am the Walrus.” I can relate. I love to sing, and in my head, I hear a soulful, husky, alto. What comes out, however, is a cross between a child in the musical Annie, and Gretchen Wilson: nasal, reedy, about as soulful as a mosquito. I’m in a band and I write all the songs, but I’m not the singer: I wouldn’t subject people to that.

Producer and editor Larry Crane says he thinks lots of artists are basically insecure about their voices, and use Auto-Tune as a kind of protective shield.

“I’ve had people come in and say I want Auto-Tune, and I say, ‘Let’s spend some time, let’s do five vocal takes and compile the best take. Let’s put down a piano guide track. There’s a million ways to coach a vocal. Let’s try those things first,’” he says.

Recently, I went over to a couple-friend’s house with my husband, to play with Auto-Tune. The husband of the couple, Mike, had the software on his home computer – he dabbles in music production – and the idea was that we’d record a song together, then Auto-Tune it.

We looked for something with four-part harmony, so we could all sing, and for a song where the backing instrumental was available online. We settled on Boyz II Men’s “End of the Road.” One by one we went into the bedroom to record our parts, with a mix of shame and titillation not unlike taking turns with a prostitute.

When we were finished, Mike played back the finished piece, without Auto-Tune. It was nerve-wracking to listen to; I felt like my entire body was cringing. Although I hit the notes OK, there was something tentative and childlike about my delivery. Thank God these are my good friends, I thought. Of course they were probably all thinking the same thing about their performances, too, but in my mind, my voice was the most annoying of all, so wheedling and prissy sounding.

Then Mike Auto-Tuned two versions of our Boyz II Men song: one with Cher / T-Pain style glitchy Auto-Tune, the other with “natural” sounding Auto-Tune. The exaggerated one was hilariously awesome – it sounded just like a generic R&B song.

But the second one shocked me. It sounded like us, for sure. But an idealized version of us. My husband’s gritty vocal attack was still there, but he was singing on key. And something about fine-tuning my vocals had made them sound more confident, like smoothing out a tremble in one’s speech.

The Auto-Tune-or-not debate always seems to turn into a moralistic one, as if you have more integrity if you don’t use it, or only use it occasionally. But seeing how innocuous, even lovely, it could be made me rethink. If I were a professional musician, would I reject the opportunity to sound what I consider to be “my best,” out of principle?

The answer to that is probably no. But then it gets you wondering. How many insecure artists with “annoying” voices will retune themselves before you ever have a chance to fall in love?


Video stills from:
TiK ToK by Ke$ha
Animal by Ke$ha
Believe by Cher
In The Air Tonight by Phil Collins
Buy U a Drank by T-Pain
Hung Up in Glee
Big Hoops by Nelly Furtado
Piano Fire by Sparklehorse and P.J. Harvey
Imagine by John Lennon


When Auto-tune was released back in 1997, it was surprisingly well-received. Cher’s “Believe” was considered the first pop song to usher in the software. It received a Grammy for Best Dance Recording and appropriately marks the turn of the century within the music world, setting the precedent for what early 2000s music would become. Auto-tune, as well as other forms of voice alteration, has since become a mainstay in the music industry and has found its way into even the more peculiar corners of the music community: indie rock and rap. But before delving into its popularity, you have to consider its precursors.

The first instances of voice alteration date back to 1928, when Homer Dudley, an electroacoustic engineer at Bell Labs, began to develop the vocoder. The tool was made to alter voice pitch and frequency and to serve as a speech coder for US war efforts. Though rudimentary at the time, the end product was good enough to be used during World War II.

Now, the history of the vocoder’s use in music is extensive and difficult to tackle outright. In the ‘50s, a German scientist, Werner Meyer-Eppler, wrote a thesis on voice synthesis and began looking into electronic music as a whole. Around this time, electronic music innovator and founder of Moog Music, Robert Moog, began to make headlines as well. Moog’s innovative approach to electronic music was paramount to the development of realistically all facets of electronic music today.

Moog began by creating theremins, electronic instruments that are played without actual touch, then slowly ventured into the world of synthesizers. It wasn’t until 1968 that Moog developed the first musical vocoder. Through more development and fine tuning, the vocoder came to work like this: it would receive a vocalist’s voice through a mic, then process it through the keys of a synthesizer. Essentially, one was able to use one’s own voice as a tangible instrument.

The invention, along with the advent of analog synthesizers, was immediately popular in the seventies and eighties. Styx’s “Mr. Roboto”, Phil Collins’ “In the Air Tonight”, Queen’s “Radio Ga Ga”: these hits were widely popular and show the swift and insurmountable impact the vocoder had once it became mainstream.

One of the keystones of early vocoder work is Kraftwerk’s 1974 album Autobahn. Kraftwerk are groundbreaking in their own right, and the album detailed the immense possibilities that synthesizers, and vocoders, had to offer. They showcase this in the opening title track, a twenty-two-minute saga of synth use and a staple of early synth work. Coincidentally, this is the first track the band ever used vocals on. Kraftwerk’s lead singer, Ralf Hütter, uses the vocoder through the song’s chorus, singing “Wir fahren, fahren, fahren auf der Autobahn”, or “We drive, drive, drive on the Autobahn”. There is something about this album that is so wonderfully seventies, between the depiction of actual enjoyment of driving on the highway and Kraftwerk’s retro-futuristic ideas.

The vocoder has since evolved, as instrumentalists and innovators have devised new means of altering one’s voice. It wasn’t until the seventies, however, that we would begin to see the beginnings of modern-day auto-tune. Antares Audio Technologies is the company responsible for today’s auto-tune and was created by Dr. Andy Hildebrand, an electrical engineer. Hildebrand earned his Ph.D. in electrical engineering from the University of Illinois in 1976 and subsequently held a job at Exxon, working in “seismic processing research”. His bio from the Antares website reads, “Andy and John (Hildebrand’s partner at Exxon) left Exxon about 1979 to start a geophysical consulting company named Cyberan.” The duo went on to create programs that would further seismic data analysis. Utilizing soundwaves and audio, Hildebrand was able to search for oil, combining the geographic sciences and audio engineering to determine the depths of oil deposits while recording data pertinent to oil companies. He furthered this work at Landmark Graphics Corporation, where he and his colleagues continued seismic processing.

Hildebrand left the company in 1989 and went on to pursue a music composition degree at the illustrious Shepherd School of Music at Rice University. In 1990, Hildebrand formed Jupiter Systems, which would later be known as Antares Audio, in order to provide a platform for his computer software. He then went on to create his first Pro Tools plug-ins, such as the Multiband Dynamics Tool and the Jupiter Voice Processor, both of which would evolve into Auto-Tune in 1997.

But now to discern the differences between vocoders, auto-tune, and talk boxes.

Auto-tune works simply as a pitch modifier, as this was its original intent: vocalists who sing off-key can be corrected as auto-tune adjusts their vocals to the nearest pitch. Whether done live or in studio, auto-tune is everywhere. Vocoders are more of an actual effect than a tool. With the vocoder, an input is put through a multi-band EQ, where it is then processed through various alterations the creator chooses. The talk box, on the other hand, works similarly to a vocoder but uses a slightly different process. Notable from Peter Frampton’s popularization of the effect, talk boxes feature a small plastic tube and their own speaker: one runs an instrument through the talk box, whose speaker pipes the sound up the tube into the performer’s mouth, where it is shaped and then picked up by a traditional mic. Your mouth acts as a filter, in a sense.
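
The multi-band description of the vocoder translates almost directly into code. Here is a toy channel vocoder in Python; the band count, band edges, and envelope smoothing are arbitrary choices, and it sketches the principle rather than any particular hardware unit.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def band_envelope(x, sr, cutoff=50.0):
    # Rough loudness contour of one band: rectify, then low-pass filter.
    sos = butter(2, cutoff, btype="lowpass", fs=sr, output="sos")
    return np.maximum(sosfilt(sos, np.abs(x)), 0.0)

def vocode(voice, carrier, sr, n_bands=16):
    # Split both signals into the same frequency bands; the voice's
    # loudness envelope in each band shapes the carrier's energy there.
    edges = np.geomspace(80.0, 7000.0, n_bands + 1)
    out = np.zeros_like(carrier, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        out += sosfilt(sos, carrier) * band_envelope(sosfilt(sos, voice), sr)
    return out / (np.max(np.abs(out)) + 1e-10)

# A sawtooth "synth" carrier shaped by a noise burst standing in for speech.
sr = 16000
t = np.arange(sr) / sr
carrier = 2 * (110 * t % 1.0) - 1.0        # 110 Hz sawtooth wave
voice = np.random.randn(sr) * (t < 0.5)    # "voice" for half a second
robot = vocode(voice, carrier, sr)
```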

The talk box has its own vast place in popular music history, and it has since taken a new shape in electropop and dance-pop. See Chromeo and their song “Fall Back 2U”, or French electronic house duo Daft Punk and their hit “Robot Rock”. Both groups utilize the talk box to intensify their disco vibes and head-bobbing dance grooves, separate from some of the more traditional classic rock uses of the effect.

Which leads us to the present music industry and the immensely intuitive and groundbreaking ways artists are using this software and these effects now.

Enter Justin Vernon, frontman and creator of indie rock stalwart Bon Iver. With his debut album For Emma, Forever Ago in 2008, Vernon showcased his ability to create atmospheric textures with limited computer use. On the opening track, “Flume”, Vernon simply uses backing guitar tracks and slight ambient noises to accompany his vocals. Vernon carries this idea through much of the album, as acoustic guitar tracks, with subtle dissonances and forlorn chords, back his vocals. Vernon achieves almost vocoder-like textures without the actual effect: on his debut album and his follow-up, self-titled album, Vernon records his vocal tracks over one another, creating a chorus of overlaid octaves and voices. His approach to singing is almost angelic at times. Vernon also adds reverb to his many voices, filling out the overall sound. Impassioned, Vernon is one of the best of today’s age at translating his emotion into song. It’s as if there is no translation loss from thought to music, a direct connection of creation.

Vernon followed with an EP in 2009 titled Blood Bank. The first three tracks were reminiscent of his debut album and retained a similar charm. However, the final song on the EP is what Bon Iver fans hype about most: “Woods”. “Woods” is rudimentary, singular, and basic in all the right ways. The song begins with a single auto-tuned vocal track. A four-line phrase is developed over and over again, each repetition bringing a new voice and adding to the overall texture. The result is an overwhelming cacophony of thickly layered vocals and a chorus of robotic, yet fervent, music.

The track serves as a cornerstone of Justin Vernon’s increasing popularity and single-handedly ushered him into the conversation of modern creative success. This garnered lots of attention for Vernon, particularly from the hip hop and R&B community. Rappers and hip hop artists alike were struck by the resounding beauty of Vernon’s vocals and began to collaborate with him on tracks. Most notable of these was rap genius Kanye West. West is infamous for his use of voice alteration and has extensively added modulation and distortion to many of his tracks.

On West’s 2010 album My Beautiful Dark Twisted Fantasy, Vernon appears on “Monster” and “Lost In The World”. “Monster” features a more aggressive Vernon while “Lost In The World” samples his “Woods” as he also supplies various “ohs” throughout the background.

Kanye continued to collaborate with Vernon on his 2013 album Yeezus, where Vernon supplied the choruses to “Hold My Liquor” and “I’m In It”. Kanye has a knack for extending vocals into beats and other unique shapes, and when mixed with Vernon’s genuine tones, ingenuity ensued.

After his early work with Kanye, Vernon found himself immersed in hip hop.

Another artist often mentioned alongside Justin Vernon is Francis Farewell Starlite, a producer and vocalist for Francis and the Lights. Starlite invited Vernon to sing on “Friends”, from the album Farewell, Starlite. What emerged was a wonderful track of synth-pop tones and arching Vernon vocal lines; an uplifting vibe. Starlite and Vernon soon found themselves on Chance the Rapper’s similar rendition of the song, titled “Summer Friends”. Chance raps over Vernon’s vocals and creates a balanced sound of harmony and rhythm.

Vernon can be found on Travis Scott’s “Naked” and P.O.S.’s “How We Land”, to name a few more, and even produced Vince Staples’ “Crabs in a Bucket”, the opening track to Staples’ latest album Big Fish Theory.

Now, Vernon’s initial project with Francis and the Lights is immensely important to his later work. Used on “Friends” is a program similar to a vocoder called the “Prismizer”, something Starlite dreamed up. With it, Starlite was able to take Vernon’s vocals, post-production, and add choral harmonies to his voice. Whereas both the vocoder and the Prismizer take vocals and layer them with textures, the Prismizer separates itself by its unique choral-like characteristics. With the vocoder, artists sing their pitch, then add voices through the keys of a synthesizer or through a computer, altering pitch and frequency.
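
The stacked-harmony idea is easy to approximate. The sketch below uses librosa’s off-the-shelf pitch shifter to stand in for whatever Starlite actually built; the chord intervals and mix level are illustrative guesses, not the Prismizer’s real parameters.

```python
import numpy as np
import librosa

def harmonize(vocal, sr, intervals=(-12, 4, 7, 12), mix=0.5):
    # Pitch-shift copies of the lead vocal to chord tones and layer
    # them under the original, post-production, like a backing choir.
    layers = [vocal.astype(float)]
    for semitones in intervals:
        shifted = librosa.effects.pitch_shift(vocal, sr=sr, n_steps=semitones)
        layers.append(mix * shifted)
    stacked = np.sum(layers, axis=0)
    return stacked / (np.max(np.abs(stacked)) + 1e-10)

# Any mono recording works as the "lead vocal" here.
y, sr = librosa.load(librosa.ex("libri1"))
choir = harmonize(y, sr)
```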

The Prismizer doesn’t always have to sound robotic, though. You can hear it used nicely on Chance’s track “How Great”, where clean vocals are favored. Popular R&B crooner Frank Ocean also utilizes various forms of vocoders. You can hear it often, in songs such as “Provider”, “Chanel”, and even more heavily in “Nikes”.

Bon Iver received two Grammys back in 2012 for the success of their second studio album, Bon Iver, and have since pushed the indie music scene into new territories. Justin Vernon’s vocals were more polished and better produced, and he had an almost ten-piece band to back him up. Horns, percussion, two drum kits, multiple guitarists: this period of Bon Iver saw immense success. It wasn’t long after, however, that Vernon would take time off from the band, citing that he was “winding down” due to the stress brought on by the newfound publicity and the rigors of touring.

Bon Iver crept back onto the scene in September of 2016 with 22, A Million. A succinct 34-minute, 10-song album, 22, A Million captures Vernon and all of his creative talents. His third studio album is a step away from past projects: it features extensive use of synthesizers and voice alteration, all while keeping that signature Bon Iver sound. Vernon, influenced by his work with Starlite, implements the Prismizer heavily throughout the album; it can be heard as soon as the opening track. “22 (Over Soon)” sets the tone for the rest of the album. You hear the twang of his usual guitar accompaniment along with sparse piano licks, gracing sax lines, and even a taste of sampling. What Bon Iver was able to do with the vocoder on this album, however, is something truly different. Vernon envelops himself in this new identity.

On the ensuing track, “10 Deathbreast”, Vernon sets his vocals against a heavy, distorted drum beat. Vernon even alters the tone of his vocoder halfway through phrases, making his voice sporadic yet seemingly purposely placed. The next track, “715 Creeks”, may be Vernon’s most real. He invokes a melody reminiscent of a song Vernon did with James Blake titled “Fall Creek Boys Choir”. The song consists of nothing but Vernon’s vocals, supplemented with heavy vocoder. Vernon carries some phrases with larger, bigger dynamics and others with a whisper. The sense of longing and lost love is traversed masterfully through the tune.

Vernon follows with “33 “God””. This piece seems like Vernon getting up after a knockout. Slowed and sluggish at first, Vernon is reflective over a repeated piano progression and accentuated synth-like string sounds in the background. A heavy synthesized bass line and upbeat drums then ensue, and the song becomes one of the most grooving on the album. Shouts and ambient percussive sounds provide the piece with extra clamor.

The following track is a calmer break from the explosiveness of the first half of the album. “29 #Strafford APTS” showcases continued strumming guitar. Vernon uses less vocoder through the verses and has Bon Iver drummer Sean Carey accompany him on vocals. At times, the duo uses vocoders to create robotic vocal tones, a segue from the extended vocal harmonies.

Bon Iver are great at effectively adding bass lines to hits within the music. The appropriate placement of the bass figures impacts the songs in a powerful way, as can be heard on “666 ʇ”. The seventh track on the album, and arguably the most experimental, is a three-minute ambient soundscape. Though it lulls through the first two minutes, the piece erupts into a chorus of sounds until a saxophone, laden with a vocoder, closes out the song with a sporadic couple of lines.

“8 (circle)”, coincidentally the eighth track on the album, draws upon the spacious synth and sax accompaniment of “Beth/Rest” from the self-titled album. On the next song, “____45_____”, a saxophone is put through a vocoder, similar to some of the previous tracks, but is significantly more melodic and backs Vernon’s swooning chorus of “I been caught in fire”. A solo banjo line fades out as the song ends.

The album ends with “00000 Million”, a fitting end to such music. The song doesn’t separate itself by instrumentation but places its emphasis on vocals and lyrics, and it even features one of the few vocal lines that does not utilize a vocoder.

Justin Vernon and Bon Iver are emblematic of intuition and artistic integrity, though no one in the indie industry is pushing the acceptance of vocal alteration quite like indie and electropop artist The Japanese House.

The Japanese House is the brainchild of Buckinghamshire native Amber Bain, who utilizes vocoders as an accentuation of her voice. Although just a few years on from their inception, The Japanese House is widely known for its vocoder use and continues to pair rolling guitar licks and almost aqueous sounds with subdued lyrics.

On her debut EP Pools To Bathe In, Bain has just four tracks to get across her emotional grief and sentiment while also displaying her musical signature: synth-heavy verses, occasional clap tracks and, most importantly, consistent use of a vocoder. Bain embraced vocal alteration early in her career and has continued its use since. Take the title track of her debut EP: the verse is filled with heavy vocals, but upon the arrival of the chorus, the song turns atmospheric as Bain relies on expansive synths and almost inaudible lyrics. “Still”, on the same EP, is one of her most powerful tracks, as she discusses self-doubt, vagueness within relationship issues, and the impending dread of breakups, all while Bain’s extensive vocals emphasize the loneliness of the song.

The Japanese House is one of the most prominent examples of using vocoders as a creative decision. Occasionally, Bain uses these effects to emphasize her voice, as in the songs “Saw You In A Dream” or “Face Like Thunder”. Here, her voice is like glass, glossing over lyrics that tend to have an upbeat feel, compared to when her use of a vocoder totally encompasses her tone, as in “Still” or “Clean”. For Bain, it seems that the more somber the track, the thicker her voice becomes.

Her debut EP was a microcosm of what her later work would become, each new release a succinct four-track EP that details loss and passion. With just four EPs to date, Bain’s portfolio is brief but showcases what vocoders can do when used to their fullest potential.

Thick vocoder use can also be heard in the haunting tunes “Hide and Seek” by Imogen Heap, “Hymn of Acxiom” by Vienna Teng, “woah” by Eden Project, and even Coldplay’s “Midnight”, as well as artists James Blake and Connor Youngblood who use it in more subtle ways.

As for the rap and hip hop world in general, there is no doubting voice alteration’s significant impact. As previously stated, Kanye is a large proponent of altering the voice to gain specific effects. Moreover, Kanye favors using the human voice and all its components. Take “Two Words” on The College Dropout, where he recruits The Boys Choir of Harlem, or “Jesus Walks” on the same album, where West literally uses a choir to create rhythms and bass lines, or even “Hey Mama” on Late Registration. A video by Vox on YouTube goes into this concept deeply, detailing the ways Kanye crafts vocals in unique ways on these tracks.

Auto-tune is heard sparingly on his first three albums, such as on Kanye’s rendition of Daft Punk’s “Harder, Better, Faster, Stronger” (which Kanye simply dubbed “Stronger”). Though it’s evident that Daft Punk’s hardy synth backdrop is the primary reason for the voice modulation there, it’s clear that Kanye would soon use the tool at length. Graduation is exemplative of West’s earlier uses of auto-tune, such as on the T-Pain-featuring “Good Life”.

When West released his fourth studio album, 808s & Heartbreak, he had already established himself as an innovator within the music industry. On this album particularly, though, West draws upon the increasingly popular use of auto-tune and adjusts it to a very “Kanye-esque” sound. On “Love Lockdown” and “Heartless” you hear the new, distorted take on Kanye’s voice that he so fondly used. He uses it almost to detail a certain emotion: the distortion becomes a new voice, a new feeling. A clean auto-tune is used on almost every song, while Kanye occasionally implements further alterations to his voice. Whether it be distortion, overlaying, or a combination of the two, Kanye’s take on voice modulation is surely distinct.

It wasn’t till My Beautiful Dark Twisted Fantasy that you hear West become more liberal with the auto-tune. Besides the tracks with Vernon previously mentioned, Kanye uses it throughout. Even on the opening track, “Dark Fantasy”, you can hear voice alteration subtly in the background of the intro. Or take the more aggressive use of its possibilities on “Power”. Kanye’s use of the software is entirely too lengthy to detail in whole, but it’s important to know that through the backlash that he, and many others who use auto-tune, receive, he keeps pushing the program’s possibilities. Kanye’s not a great singer; if his voice sounds on pitch, that’s merely a byproduct of the creative decision to use voice alteration in the first place.

When Kanye released Yeezus, it was also around the time that his private and public life began to merge. The succession of Kanye’s albums is fascinating to witness when juxtaposed with what he was going through with the media. Thus Yeezus is, and was, wonderfully weird. Whether on “Black Skinhead” or “Hold My Liquor”, West uses his darker, distorted voices, eventually ending the album with an upbeat “Bound 2” that allows Kanye to say “all is good”.

At this point in Kanye’s career, it seems he could release anything and fans would enjoy it, which is probably not too far from the truth. The Life of Pablo was released in February 2016 and opens with a more mature Kanye. The opening track features an evident use of auto-tune and a wonderful feature from Chance the Rapper later in the song. Kanye, essentially, is all over the place on this album.

Voice alteration can be heard through many of rap’s foundational artists: 2Pac and Dr. Dre’s “California Love” gets its catchy hook from Roger Troutman’s talk box, Snoop Dogg’s “Sexual Eruption” leans on auto-tune, and Outkast’s “Synthesizer” features funk hero George Clinton. It is even heard more subtly, as in Rihanna’s 2008 hit “Disturbia” and Kesha’s 2010 pop hit “TiK ToK”.

The use of vocoders, and more particularly auto-tune, is a widely known practice. Thus, these topics are impossible to discuss without acknowledging T-Pain.

There is one name that is almost entirely synonymous with Auto-tune, and that is T-Pain. T-Pain released his first studio album, Rappa Ternt Sanga, in 2005, featuring the infamous hits “I’m Sprung” and “I’m N Luv (Wit a Stripper)”. Rappa Ternt Sanga peaked at 33rd on the Billboard 200 in 2005. Since then, T-Pain has graced listeners with banger after banger. Whether on his own songs or featuring on others’, T-Pain had a knack for producing top hits during the late 2000s.

By 2008, T-Pain had released his third studio album, THR33 RINGZ, cementing himself as an R&B and hip hop mainstay. The album featured the widely popular “Can’t Believe It” with Lil Wayne. This song combined two of rap’s most prominent rappers and vocalists and represented T-Pain’s peak. The song peaked at seventh on the charts and marked the second time T-Pain worked with Lil Wayne.

The surrounding success of THR33 RINGZ and past albums even paved the way for T-Pain to have his own app. “I Am T-Pain” allowed fans to create quick snippets of voice recordings and videos with their voices processed through auto-tune, further increasing the association between the vocal effect and the singer.

After THR33 RINGZ’s success, T-Pain’s luster wore off. His 2011 album rEVOLVEr didn’t receive nearly the same recognition as his past music. It did well, with songs like “5 O’Clock” and “Best Love Song”, but T-Pain seemed to be in uncharted territory. It seems T-Pain had attempted to change his sound, and rightfully so, but it stuck too close to the norms of 2011 pop. It wasn’t as outlandish or rashly sexual as his older music, which very well could have been on purpose. You hear a gentler, more mature T-Pain.

And so many fans have the idea that T-Pain is a pawn, not just of auto-tune, but of restricting himself to naughty pop hits. His albums often offer so much more. Of course, his feature on “Low” and his collab on Jamie Foxx’s “Blame It” are classics, instant bangers, but T-Pain actually does a wonderful job of combining standard R&B rhythms with an impassioned, auto-tuned voice.

rEVOLVEr was the last album he released for years, but T-Pain still collaborates and remains active. In 2014, T-Pain performed on NPR’s Tiny Desk Concert series on YouTube, singing without any pitch correction. Garnering over 11 million views, the video inspired T-Pain to pursue a short acoustic tour. The auto-tune king is taking a break from his signature sound.

Thanks to rap artists such as Migos, Travis Scott, and Future, auto-tune will live on through many different facets. Though hardly up-and-coming, these rappers represent the future of rap and show that auto-tune will remain a favored tool in the genre.

On Travis Scott’s “90210”, you hear an almost Kanye-like approach to auto-tune. Scott’s voice is noticeably more synthesized, joined with a bit more bass in the vocals themselves. The song makes for a mildly upbeat tempo. Scott’s voice loses the synth sound later in the song and reverts to his usual use of auto-tune, while piano, congas, and a distorted voice supply the driving energy for the piece.

You can hear plenty of auto-tune across Future’s albums HNDRXX and Future, among others. Particularly on a song like “Rent Money”, Future’s use of auto-tune seems more functional than ornamental. Future first hit the scene with his 2012 mixtape Astronaut Status, and though his aesthetic seemed geared toward space-like feels, it hasn’t always been evident in the music. Nonetheless, like many artists, Future’s sound and individuality are inseparable from auto-tune.

It seems that voice alteration found its way into rap and indie rock due to a few shared characteristics. One, both genres are inherently passionate. This is not to say that other forms of art don’t render such passion, but there is something about both rap and indie rock that is so raw, pure, and idiomatic. Their art is unapologetic and unabashed, opinions thrown out into the open.

Secondly, the genres place the same weight of importance on the voice. The voice is at the root of all tonal music and is the most important instrument within it. Traditional instrumentalists put in vast amounts of time to play as naturally and cleanly as the human voice; it is what they aim for. And when the goal is the human voice, what is more charismatic than altering the voice to accentuate itself?
