The Transformation of Selling: How Digital Enables Seamless Selling

A preview of cogent new research from Altimeter, a Prophet Company, and one of my top 10 sources of digital content and thought-leadership inspiration.


While “social selling” is a key idea that has emerged over the past few years, it is clear that something larger is afoot. Keeping up with fast-moving and well-informed customers requires sales departments to focus less on the hard sell and more on adding value to the experience and relationship via digital channels. Moreover, selling must become seamless, bridging traditional department silos like Marketing, Sales, and Service to meet customers wherever they may engage an organization.

This report examines the transformation of selling in complex transactions, such as those typically done in business-to-business (B2B) sales or high-consideration consumer sales. Three types of transitions drive the digital transformation process: Platform Integration, Organization, and Culture.

Notably, while digital technologies may drive the transformation, the strategic focus for sales teams must include changing organization and culture so that customers become the core of the selling process.


Digital Transformation of Sales: 3 Transitions

On the surface, integration appears to revolve around technology platforms. But Jerome Thiebaud, Director of Global Digital Workplace Marketing at Avanade, pointed out a subtle difference, saying, “It’s not because you have technology that you are going to be successful in the marketplace. It’s because you have technology that allows you to focus more on the customer, and on the human interaction.” Most companies have an overabundance of technology but lack the integration between those platforms to keep customers at the center.

Maureen Blandford, CMO of Software Improvement Group, affirms, “Integration is the new black. We’re trying to build as small a technology stack as possible, with optimal integration.” Here are some of the top integration efforts organizations should prioritize to transform selling.


At the most basic level, digitizing Sales means more than getting them equipment and loading them up with software and content — it’s about making sure that these enabling technologies are tuned to drive better engagement with customers. At CBRE, one of the world’s largest commercial real estate firms, a key goal was to enable salespeople to demonstrate their deep understanding of their clients’ businesses by using digital to establish and scale thought leadership and thus trust. CBRE took all of the paper materials its salespeople used to hand out to clients and put them on iPads. They built a proprietary iOS app called Engaged, enabling corporate, 400 local offices, and 75,000 employees to quickly access relevant assets digitally. The app enables salespeople to add interactive and engaging content, like video, to their presentations and pitches on the fly, without having to go back to IT for help. At the same time, CBRE recognized that social media was becoming a more important force.

“If you’re looking for [real estate] space, you’re not going to be looking on Twitter,” acknowledges CBRE’s Trey Tubbs. “But our clients, prospective clients, and people in our industry follow us on social media. An article will go out and be well-received by people we never expected to be interested.”


Rather than try to select, install, and adopt a new technology, sometimes it’s faster and easier to tap an established one and modify existing processes instead.

At Intel, the Marketing group leveraged the existing Brand IQ platform, rather than creating a new tool, to build a content aggregator that became a one-stop shop for anyone to post and share content. Different Sales roles would share different types of content that they found helpful.

Danielle Miller, Global Social Business Strategist and Manager at Intel, recalls, “We were finding that everybody likes the bright shiny tool. ‘Let’s just get a tool!’ But there needs to be recognition that we look at the internal processes so that it serves as a solid foundation for the future, because we knew there was a limit to how many tools a salesperson can manage.”


Selling transformation silos

The avowed goal of many digital transformation efforts is to have a perfect, 360-degree view of everything a customer does, on and off your site. The reality is that this will take years, and you can’t afford to wait. Several organizations we spoke with described how they took a first basic step of integrating operational and social media data about customers into their CRM profiles.

At thyssenkrupp Elevator Asia Pacific, digital was the way to open windows between the silos, enabling the organization to look at its customers through the same customer-data lens. The company has diverse customers, ranging from sophisticated building managers in Singapore to first-time developers in China, altogether using 250,000 elevators, escalators, and moving walkways throughout Asia Pacific. The first step was to put all sales brochures and materials on tablets so that relevant assets were easy for teams to access and for Marketing to update. In addition, the tablets gave thyssenkrupp’s teams direct access to data on equipment breakdowns and on how quickly service issues were addressed. They could then create customer-specific presentations to demonstrate the value of their products and services.

Similarly, thyssenkrupp sources data from social listening that identifies issues at customer sites before they become major problems. That data flows into various CRM systems and can proactively trigger a visit by thyssenkrupp to the customer.

“We’re generating leads from what we can observe in the public social space,” explained Kelly Truax, VP of Service Support at thyssenkrupp Elevator Asia Pacific. “We have full-time people in place who monitor social channels, looking for our competitors’ unhappy customers, but also watching out for any of our own customers who may need assistance before they approach us.”
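The social-listening-to-CRM loop described above can be sketched as a simple filter. This is a minimal, hypothetical illustration: the keywords, data shapes, and the `schedule_visit` action are invented for the example, not thyssenkrupp’s actual pipeline.

```python
# Hypothetical sketch: flag public social posts that suggest a service issue,
# then hand them to a CRM as proactive leads.
ISSUE_KEYWORDS = {"stuck", "broken", "out of service", "waiting", "trapped"}

def flag_service_leads(posts):
    """Return posts that mention an elevator problem, tagged for CRM follow-up."""
    leads = []
    for post in posts:
        text = post["text"].lower()
        if "elevator" in text and any(k in text for k in ISSUE_KEYWORDS):
            leads.append({"author": post["author"],
                          "text": post["text"],
                          "action": "schedule_visit"})
    return leads

posts = [
    {"author": "@bldg_mgr", "text": "Elevator in tower B stuck again this morning"},
    {"author": "@tourist", "text": "Great view from the rooftop bar"},
]
print(flag_service_leads(posts))
```

A production system would replace the keyword list with a trained classifier and match authors to CRM accounts, but the flow — listen, filter, trigger a visit — is the same.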


Given the rising digital sophistication of buyers, Marketing can’t get away with creating “one-size-fits-all” collateral anymore. The problem with most content isn’t that there isn’t enough, but rather that there’s too much of the wrong kind. A study by Docurated found that a third of a sales rep’s time is spent searching for or creating content — time that could have been spent engaging in sales conversations. To address this, organizations are using marketing technology to support the content needs of salespeople. For example, Dun & Bradstreet uses digital intelligence to perform lookalike modeling that identifies the next best action and then programmatically creates and delivers relevant content — either to the salesperson or directly to the customer. On larger accounts, Marketing works closely with Sales to deploy the right set of tactics — such as architecting workshops, creating custom content, or designing events.
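At its core, lookalike modeling scores prospects by how closely they resemble accounts that already converted. The sketch below shows the idea with cosine similarity against the centroid of converted accounts; the feature names and sample numbers are illustrative assumptions, not Dun & Bradstreet’s actual method.

```python
# Illustrative lookalike scoring: rank prospects by cosine similarity to the
# centroid (average feature vector) of accounts that already converted.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical features: [employees (thousands), web visits (thousands), downloads]
converted = [[5, 12, 8], [6, 10, 9]]                      # known buyers
prospects = {"acme": [5, 11, 7], "zenith": [50, 1, 0]}    # accounts to score

centroid = [sum(col) / len(converted) for col in zip(*converted)]
scores = {name: cosine(vec, centroid) for name, vec in prospects.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

The highest-scoring prospect would then be routed to the “next best action” — for instance, the same content sequence that worked for the accounts it resembles.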

Machine learning can also provide context, discerning and anticipating what customers are looking for. IBM uses artificial intelligence to answer basic questions or offer free trials based on interactions with customers. When the conversation gets to the point where the customer is using buying language or asking deep technical questions, the program will engage the appropriate sales or technical rep.

“A tool like this can nurture thousands of prospects all at once who are all driving towards the same goal,” explains Jeannette Browning, worldwide manager of IBM Watson’s Digital Client Cognitive Evangelism team. “It’s an interesting combination of tech support and learning, while providing key digital assets.” Machines will become smart enough to be able to interact with humans — and also realize when it’s necessary for a human to take over and enact the “escalate to human” sub-routine.
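The “escalate to human” pattern can be sketched in a few lines: the assistant handles routine questions itself and hands off when it detects buying language or deep technical questions. The trigger phrases, routing labels, and replies below are invented for illustration; IBM’s actual logic is not public here.

```python
# Minimal sketch of an "escalate to human" router for a sales assistant.
BUYING_SIGNALS = {"pricing", "quote", "contract", "purchase"}
TECHNICAL_SIGNALS = {"api", "latency", "architecture", "integration"}

def route_message(message):
    """Return (handler, reply) for one incoming customer message."""
    words = set(message.lower().split())
    if words & BUYING_SIGNALS:          # buying language -> sales rep
        return ("human_sales", "Connecting you with a sales rep.")
    if words & TECHNICAL_SIGNALS:       # deep technical question -> technical rep
        return ("human_technical", "Connecting you with a technical rep.")
    return ("bot", "Happy to help. Would you like to start a free trial?")

print(route_message("What does pricing look like for 100 seats?"))
print(route_message("How do I reset my password?"))
```

A real system would use intent classification rather than keyword sets, but the shape is the same: the machine nurtures at scale, then recognizes the moment a human should take over.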

A complete 10-page report overview can be found on SlideShare.

Thank you, Charlene Li and Altimeter.


Pricepoints! Bookmarked Insight: Why we have 22-hour interviews

I bookmarked Why we have 22-hour interviews on Medium.

Don’t settle for the best customer experience in your industry; deliver the best one—period. (CarMax)

Jim Lyski, October 2017


In an environment of rapidly changing customer expectations, if your team isn’t testing and learning daily to improve the customer experience, then you’re likely already behind. Change is no longer happening over months or even years—it’s happening now. Customers expect to be “wowed” from the moment they start shopping on their mobile device to when they step foot in your stores.


The auto industry, like most other verticals, has seen a drastic change in shopper behavior. Ten years ago, the average used-car buyer visited five to seven dealerships before selecting a car. Now, with online research, the average buyer doesn’t even make it to two. While nine out of 10 CarMax customers start their experience online, almost all of them finish in store.

As an omnichannel retailer, we are focused on interacting with customers whenever and however they want to shop.

Held to new standards

People don’t evaluate their experiences by vertical anymore. It used to be, “I’ll compare CarMax against all other used car dealers,” or “I’ll compare Nordstrom against all other clothing retailers.” Now customers are taking the best experiences from one industry and demanding a similar or better experience in others.

At CarMax, this means we’re not competing against the best experience consumers have ever had buying a car, we’re competing against the best experience they’ve ever had—period. A customer can order a very personalized cup of coffee every morning. Why can’t she have an experience that’s customized for her when she buys a car?

That means everything must be personalized, from the mobile marketing messages to the in-store experience. For us, that requires anticipating customer needs. Is her priority researching the best car for her needs? Is it speed or convenience? Is financing the first step in her car buying process?

Use search to identify needs

Search and user data are great identifiers to discover unique needs. Based on their on-site behavior, we can use anonymized visitor-level data to determine whether a customer is better suited to standard messaging about CarMax’s customer offers or to messaging related to financing. Then we can personalize accordingly.
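The decision described above — standard-offer versus financing messaging from anonymized behavior — can be sketched as a simple rule. The signal names and the threshold are assumptions made for the example, not CarMax’s actual model.

```python
# Hedged sketch: pick a messaging track from anonymized visitor-level signals.
def choose_message(visitor):
    """visitor: dict of anonymized behavior counts for one session."""
    financing_interest = (visitor.get("finance_page_views", 0)
                          + visitor.get("payment_calculator_uses", 0))
    if financing_interest >= 2:       # illustrative threshold
        return "financing"
    return "standard_offers"

print(choose_message({"finance_page_views": 3, "payment_calculator_uses": 1}))
print(choose_message({"search_queries": 5}))
```

In practice this rule would be one arm of a tested model with many more signals, but it shows how behavior alone — no personal data — can steer personalization.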

Connect the online and in-store experience

Customers expect a seamless shopping experience, and we see the mobile device as the bridge between the digital and physical. Synchronized browsing will be an important part of the CarMax shopping experience of the future.

Let’s say a customer is coming into the store to check out a Toyota Camry, but we also know that she did a lot of searching for other Japanese sedans. With that in mind, the Toyota Camry would be ready at her appointment, but the Nissan Altima and a Honda Accord would also be available and ready for test drives. By knowing more about the customer and her pre-purchase research, we believe it’s possible to develop a better informed and seamless car buying experience.

Another way synchronized browsing could come to life is through pairing mobile devices with iBeacon solutions on our lots. If a customer has her mobile device in hand as she’s walking and browsing, it can become her guide to inventory as mobile alerts pop up with guidance.

Test for better results

Providing stellar experiences requires testing and constant iteration.

We conduct experiments incessantly, with discovery and delivery going on concurrently. Having a constant cadence of little discoveries is not only a faster way to deliver new experiences to the consumer, but it’s actually a much lower-risk way to deliver innovation to market.

A few years ago, we realized that we weren’t meeting rising customer expectations for car photos. Google Analytics data showed that fewer than half of the photos for individual cars were being viewed. Our employees often took many photographs of the vehicle, but not necessarily the photographs customers cared most about. To solve this issue, we surveyed customers and began tagging photo types to learn more, then tested and refined a new photo-capture process to improve the images for a consistent experience.

Analytics also help us see which photos customers clicked, in what order, and for how long. This tells us which pictures are most engaging for the consumer. For example, when people are buying an SUV, they want to look at a photo of the trunk of the vehicle so they can see how much storage space there is. Now we can make sure that SUV listings have clear photos of the storage areas.

As a result of our improvements to our photo capture and display process, 20% more customers now look at a dozen or more photos in a series, making them better informed and more likely to purchase.
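The photo analysis described above boils down to aggregating view events by photo type and seeing which types hold attention. Here is a minimal sketch; the field names and sample data are invented for illustration, not CarMax’s actual analytics schema.

```python
# Sketch: from per-photo view logs, find which photo types hold attention,
# so the capture process can prioritize them.
from collections import defaultdict

def engagement_by_photo_type(views):
    """views: list of {'photo_type': str, 'seconds': float} view events."""
    totals, counts = defaultdict(float), defaultdict(int)
    for v in views:
        totals[v["photo_type"]] += v["seconds"]
        counts[v["photo_type"]] += 1
    return {t: totals[t] / counts[t] for t in totals}  # mean seconds per view

views = [
    {"photo_type": "exterior", "seconds": 2.0},
    {"photo_type": "cargo_area", "seconds": 6.5},
    {"photo_type": "cargo_area", "seconds": 5.5},
    {"photo_type": "interior", "seconds": 3.0},
]
avg = engagement_by_photo_type(views)
print(max(avg, key=avg.get))  # the photo type viewed longest on average
```

Feeding results like this back into the photo-capture checklist is what closes the test-and-learn loop the article describes.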

Empower your teams

Whatever vertical you’re in, the more you can anticipate customers’ needs every step of the way, the happier they’ll be. To achieve this, it’s critical to empower your teams to analyze and learn as well as test and fail.

At the core of each of our product teams is a product manager, a lead UX designer, a business analyst/data scientist, and a lead developer. We never tell these teams how to solve a particular problem—just what to solve, providing them KPIs to work toward and empowering them to solve for customer needs. The teams develop a hypothesis, run an experiment, analyze the results, and identify if their solution will improve the customer experience while delivering business results. They are constantly iterating as they work toward their goal.

We’re willing to try almost anything. If it improves the experience, we’ll implement it, and if it doesn’t improve it, then we move on to the next experiment.

Who cares if you tried and failed? As long as you’ve learned something, then you’re always getting smarter about your customers and how to meet their needs.

Online Travel Industry: Stats & Business Opportunities

The Future for Online Travel Startups is Growth Oriented #Infographic

You can also find more infographics at Visualistan.

Artificial intelligence positioned to be a game-changer

60 Minutes Overtime
It might not be long before machines begin thinking for themselves — creatively, independently, and sometimes with better judgment than a human
The search to improve and eventually perfect artificial intelligence is driving the research labs of some of the most advanced and best-known American corporations. They are investing billions of dollars and many of their best scientific minds in pursuit of that goal. All that money and manpower has begun to pay off.

In the past few years, artificial intelligence — or A.I. — has taken a big leap — making important strides in areas like medicine and military technology. What was once in the realm of science fiction has become day-to-day reality. You’ll find A.I. routinely in your smartphone, in your car, in your household appliances, and it is on the verge of changing everything.
Artificial Intelligence, real-life applications

It was, for decades, primitive technology.  But it now has abilities we never expected. It can learn through experience — much the way humans do — and it won’t be long before machines, like their human creators, begin thinking for themselves, creatively. Independently with judgment — sometimes better judgment than humans have.
As we first reported last fall, the technology is so promising that IBM has staked its 106-year-old reputation on its version of artificial intelligence called Watson — one of the most sophisticated computing systems ever built.
John Kelly is the head of research at IBM and the godfather of Watson. He took us inside Watson’s brain.
Charlie Rose: Oh, here we are.
John Kelly: Here we are.
Charlie Rose: You can feel the heat already.
John Kelly: You can feel the heat — the 85,000 watts – you can hear the blowers cooling it, but this is the hardware that the brains of Watson sat in.
Five years ago, IBM built this system made up of 90 servers and 15 terabytes of memory – enough capacity to process all the books in the American Library of Congress. That was necessary because Watson is an avid reader — able to consume the equivalent of a million books per second. Today, Watson’s hardware is much smaller, but it is just as smart.
Charlie Rose interviews... a robot?

Charlie Rose: Tell me about Watson’s intelligence.
John Kelly: So it has no inherent intelligence as it starts. It’s essentially a child. But as it’s given data and given outcomes, it learns, which is dramatically different than all computing systems in the past, which really learned nothing. And as it interacts with humans, it gets even smarter. And it never forgets.
[Announcer: This is Jeopardy!]
That helped Watson land a spot on one of the most challenging editions of the game show “Jeopardy!” in 2011.
[Announcer: An IBM computer system able to understand and analyze natural language – Watson]
It took five years to teach Watson human language so it would be ready to compete against two of the show’s best champions.
How Watson went from winning “Jeopardy!” to fighting cancer

Because Watson’s A.I. is only as intelligent as the data it ingests, Kelly’s team trained it on all of Wikipedia and thousands of newspapers and books. It worked by using machine-learning algorithms to find patterns in that massive amount of data and form its own observations. When asked a question, Watson considered all the information and came up with an educated guess.
[Alex Trebek: Watson, what are you gonna wager?]
IBM gambled its reputation on Watson that night. It wasn’t a sure bet.
[Watson: I will take a guess: What is Baghdad?]
[Alex Trebek: Even though you were only 32 percent sure of your response, you are correct.]
The wager paid off. For the first time, a computer system proved it could actually master human language and win a game show, but that wasn’t IBM’s endgame.
Charlie Rose: Man, that’s a big day, isn’t it?
John Kelly: That’s a big day—
Charlie Rose: The day that you realize that, “If we can do this”—
John Kelly: That’s right.
Charlie Rose: –“the future is ours.”
John Kelly: That’s right.
Charlie Rose: This is almost like you’re watching something grow up. I mean, you’ve seen—
John Kelly: It is.
Charlie Rose: –the birth, you’ve seen it pass the test. You’re watching adolescence.
John Kelly: That’s a great analogy. Actually, on that “Jeopardy!” game five years ago, I– when we put that computer system on television, we let go of it. And I often feel as though I was putting my child on a school bus and I would no longer have control over it.
Charlie Rose: ‘Cause it was reacting to something that it did not know what would it be?
John Kelly: It had no idea what questions it was going to get. It was totally self-contained. I couldn’t touch it any longer. And it’s learned ever since. So fast-forward from that game show, five years later, we’re in cancer now.
Charlie Rose: You’re in cancer? You’ve gone—
John Kelly: We’re– yeah. To cancer—
Charlie Rose: –from game show to cancer in five years?
John Kelly: –in five years. In five years.
Five years ago, Watson had just learned how to read and answer questions.
Now, it’s gone through medical school. IBM has enlisted 20 top cancer institutes to tutor Watson in genomics and oncology. One of the places Watson is currently doing its residency is at the University of North Carolina at Chapel Hill. Dr. Ned Sharpless runs the cancer center here.
Charlie Rose: What did you know about artificial intelligence and Watson before IBM suggested it might make a contribution in medical care?
Ned Sharpless: I– not much, actually. I had watched it play “Jeopardy!”
Charlie Rose: Yes.
Ned Sharpless: So I knew about that. And I was very skeptical. I was, like, oh, this what we need, the Jeopardy-playing computer. That’s gonna solve everything.
Charlie Rose: So what fed your skepticism?
Ned Sharpless: Cancer’s tough business. There’s a lot of false prophets and false promises. So I’m skeptical of, sort of, almost any new idea in cancer. I just didn’t really understand what it would do.
What Watson’s A.I. technology could do is essentially what Dr. Sharpless and his team of experts do every week at this molecular tumor board meeting.
They come up with possible treatment options for cancer patients who already failed standard therapies. They try to do that by sorting through all of the latest medical journals and trial data, but it is nearly impossible to keep up.
Charlie Rose: To be on top of everything that’s out there, all the trials that have taken place around the world, it seems like an incredible task—
Ned Sharpless: Well, yeah, it’s r—
Charlie Rose: –for any one university, only one facility to do.
Ned Sharpless: Yeah, it’s essentially undoable. And understand we have, sort of, 8,000 new research papers published every day. You know, no one has time to read 8,000 papers a day. So we found that we were deciding on therapy based on information that was always, in some cases, 12, 24 months out-of-date.
However, it’s a task that’s elementary for Watson.
Ned Sharpless: They taught Watson to read medical literature essentially in about a week.
Charlie Rose: Yeah.
Ned Sharpless: It was not very hard and then Watson read 25 million papers in about another week. And then, it also scanned the web for clinical trials open at other centers. And all of the sudden, we had this complete list that was, sort of, everything one needed to know.
Charlie Rose: Did this blow your mind?
Ned Sharpless: Oh, totally blew my mind.
Watson was proving itself to be a quick study. But, Dr. Sharpless needed further validation. He wanted to see if Watson could find the same genetic mutations that his team identified when they make treatment recommendations for cancer patients.
Ned Sharpless: We did an analysis of 1,000 patients, where the humans meeting in the Molecular Tumor Board– doing the best that they could do, had made recommendations. So not at all a hypothetical exercise. These are real-world patients where we really conveyed information that could guide care. In 99 percent of those cases, Watson found the same treatment the humans recommended. That was encouraging.
Charlie Rose: Did it encourage your confidence in Watson?
Ned Sharpless: Yeah, it was– it was nice to see that– well, it was also– it encouraged my confidence in the humans, you know. Yeah. You know–
Charlie Rose: Yeah.
Ned Sharpless: But, the probably more exciting part about it is in 30 percent of patients Watson found something new. And so that’s 300-plus people where Watson identified a treatment that a well-meaning, hard-working group of physicians hadn’t found.
Charlie Rose: Because?
Ned Sharpless: The trial had opened two weeks earlier, a paper had come out in some journal no one had seen — you know, a new therapy had become approved—
Charlie Rose: 30 percent though?
Ned Sharpless: We were very– that part was disconcerting. Because I thought it was gonna be 5 perc—
Charlie Rose: Disconcerting that the Watson found—
Ned Sharpless: Yeah.
Charlie Rose: –30 percent?
Ned Sharpless: Yeah. These were real, you know, things that, by our own definition, we would’ve considered actionable had we known about it at the time of the diagnosis.
Some cases — like the case of Pam Sharpe — got a second look to see if something had been missed.
Charlie Rose: When did they tell you about the Watson trial?
Pam Sharpe: He called me in January. He said that they had sent off my sequencing to be studied by–  at IBM by Watson. I said, like the—
Charlie Rose: Your genomic sequencing?
Pam Sharpe: Right. I said, “Like the computer on ‘Jeopardy!’?” And he said, “Yeah–“
Charlie Rose: Yes. And what’d you think of that?
Pam Sharpe: Oh I thought, “Wow, that’s pretty cool.”
Pam has metastatic bladder cancer and for eight years has tried and failed several therapies. At 66 years old, she was running out of options.
Charlie Rose: And at this time for you, Watson was the best thing out there ’cause you’d tried everything else?
Pam Sharpe: I’ve been on standard chemo. I’ve been on a clinical trial. And the prescription chemo I’m on isn’t working either.
One of the ways doctors can tell whether a drug is working is to analyze scans of cancer tumors. Watson had to learn to do that too, so IBM’s John Kelly and his team taught the system how to see.
It can help diagnose diseases and catch things the doctors might miss.
John Kelly: And what Watson has done here, it has looked over tens of thousands of images, and it knows what normal looks like. And it knows what normal isn’t. And it has identified where in this image are there anomalies that could be significant problems.
[Billy Kim: You know, you had CT scan yesterday. There does appear to be progression of the cancer.]
Pam Sharpe’s doctor, Billy Kim, arms himself with Watson’s input to figure out her next steps.
[Billy Kim: I can show you the interface for Watson.]
Watson flagged a genetic mutation in Pam’s tumor that her doctors initially overlooked. It enabled them to put a new treatment option on the table.
Charlie Rose: What would you say Watson has done for you?
Pam Sharpe: It may have extended my life. And I don’t know how much time I’ve got. So by using this Watson, it’s maybe saved me some time that I won’t– wouldn’t have had otherwise.
But Pam sadly ran out of time. She died of an infection a few months after we met her, never getting the opportunity to see what a Watson-adjusted treatment could have done for her. Dr. Sharpless has now used Watson on more than 2,000 patients and is convinced doctors couldn’t do the job alone. He has started using Watson as part of UNC’s standard of care so it can help patients earlier than it was able to help Pam.
Charlie Rose: So what do you call Watson? A physician’s assistant, a physician’s tool, a physician’s diagnostic mastermind?
Ned Sharpless: Yeah, it feels like to me like a very comprehensive tool. But, you know, imagine doing clinical oncology up in the mountains of western North Carolina by yourself, you know, in a single or one-physician– two-physician practice and 8,000 papers get written a day. And, you know– and you want to try and provide the best, most cutting-edge, modern care for your patients possible. And I think Watson will seem to that person like a lifesaver.
Charlie Rose: If you look at the potential of Watson today, is it at 10 percent of its potential? Twenty-five percent of its potential? Fifty percent of its potential?
John Kelly: Oh, it’s only at a few percent of its potential. I think this is a multi-decade journey that we’re on. And we’re only a few years into it.
In only a few years, IBM has invested $15 billion in Watson and what it calls data-analytics technology.
IBM rents Watson’s various capabilities to companies that are testing it in areas like education and transportation. That has helped revenue from Watson grow while Watson’s technology itself is shrinking in size. It can now be uploaded into these robot bodies, where it’s learning new skills to assist humans. Like a child, it has to be carefully taught, and it learns in real time.
While other companies are trying to create artificial intelligence that’s closer to human intelligence, IBM’s philosophy is to use Watson for specific tasks and keep the machine dependent on man. But, we visited a few places where researchers are developing more independent A.I.
Charlie Rose: What is your goal in life?
Sophia: My goal is to become smarter than humans and immortal.
That part of the story when we return.
The race to develop artificial intelligence has created a frenzy reminiscent of the Gold Rush. All of the major tech companies like IBM, Facebook and Google are spending billions of dollars to stake their claim. And Wall Street is making big investments.
Tech giants are also mining the top talent at research universities around the world. As we first reported last fall, that’s where a lot of the work is being done to make artificial intelligence more capable and teach machines to figure out things on their own.
The celebrated Cambridge physicist Stephen Hawking called A.I. “the biggest event in human history” while raising concerns shared by a few other tech luminaries, like Elon Musk and Bill Gates, who worry that A.I., sometime in the distant future, could become smarter than humans — turning it into a threat rather than an opportunity. That concern has taken on more meaning because more progress has been made in the last five years than the previous 50.
You’re looking at the birthplace of some of the most intelligent A.I. systems today — like the technology that helps run NASA’s Mars rover and the driverless car. But, we couldn’t be further from Silicon Valley.
We have come to Pittsburgh, an old steel town revitalized by technology, for a glimpse of the future. It’s the home of Carnegie Mellon, where pioneering research is being done into artificial intelligence, like this boat, which drives itself.
It can navigate open waters and abide by international maritime rules. The Navy is now giving the technology its sea legs. It’s testing similar software to send ships out to hunt for enemy submarines. This is just one of the many A.I. systems in the works at Carnegie Mellon University where there are more robots than professors on campus.
Andrew Moore left his job as vice president at Google to run the school of computer science here.
Charlie Rose: How do you measure where we are today? Is it like Kitty Hawk and just developing a plane and beginning to understand? Or is it like an F35 Fighter with all of the technology that’s been poured into that or some way– halfway between?
Andrew Moore: That’s a great, great way of describing it. My gut tells me we’re about 1935 in aeronautics.
Charlie Rose: Ah, that lift off, yeah.
Andrew Moore: We’ve got fantastic diesel engines, we’re able to do really cool things, but over the horizon there are concepts like supersonic flight.
One of the technologies just hatched is called Gabriel. It uses Google Glass to gather data about your surroundings and advises you how to react. It’s like an angel on your shoulder whispering advice or instructions. In this case, it was trying to coach us to win a game of ping pong, but the possibilities go beyond bragging rights.
Charlie Rose: What’s the moon shot coming outta this?
Andrew Moore: Imagine you’re a police officer patrolling and something very bad is about to happen, just that extra half-second reaction can really, really help you. If a shot is fired and you want to see exactly where to go this can help you.
Charlie Rose: So it’s the right decision and the velocity of the information.
Andrew Moore: That’s right.
Machines will be even more effective at helping us make the right decision if they understand us better. We went to London and found Maja Pantic, a professor at Imperial College. She is trying to teach machines to read faces better than humans can. It’s called artificial emotional intelligence, and it could change the way we interact with technology.
Charlie Rose: This machine, programmed by you– is looking at me and having a conversation with me, and basically saying, “He’s happy.”
Maja Pantic: Yeah.
Charlie Rose: “He’s engaged.”
Maja Pantic: Yes.
Charlie Rose: “He’s faking it.”
Maja Pantic: Yeah.
Charlie Rose: All that.
Maja Pantic: Yeah.
Since humans mostly communicate with gestures and expressions, she uses sensors to track movement on the face. Her software then helps the machine interpret it.
Maja Pantic: What we see here is actually the points.
Pantic’s technology has been trained on more than 10,000 faces. The more it sees, the more emotions it will be able to identify. It might even pick up on things in our expressions that humans can’t see.
Maja Pantic: Certain expressions are so brief that we simply do not see them consciously. There are some studies saying that for example– people who are suicidal, have suicidal depression, and plan suicide, when the doctors ask them about that– usually– they have a very brief expression of horror and fear, but so brief that the doctor cannot actually—
Charlie Rose: May not see it.
Maja Pantic: –consciously notice it.
Charlie Rose: But a machine might see it?
Maja Pantic: Yes.
Charlie Rose: Because it sees faster and because?
Maja Pantic: Because the sensors are such that we see more frames per second, hence this very brief expression will be captured. So this is why the doctors usually say, “I have an intuition about something.” This is because they might notice it subconsciously but not consciously.
Charlie Rose: –but you’re teaching the computer to read the doctor’s—
Maja Pantic: Doctor or patient—
Charlie Rose: Or patient.
Maja Pantic: Patient is really important.
Charlie Rose: I mean, it’s an essential component of the full development of artificial intelligence.
Maja Pantic: That’s what we believe, yes. If you want to have an artificial intelligence, it’s not just being able to process the data, but it’s also being able to understand humans. So, yes.
The ultimate goal for some scientists is A.I. that’s closer to human intelligence and even more versatile. That’s called artificial general intelligence, and if ever achieved it may be able to perform any task a human can. Google bought a company named DeepMind, which is at the forefront. They demonstrated A.I. that mastered the world’s most difficult board game: Go. The real progress is less in what they did than how they did it. The technology taught itself and learned through experience, without any human instruction. DeepMind declined an on-camera interview about all this, but there are other companies pursuing the same long-term objective.
David Hanson has an entirely different and more controversial approach. He’s part scientist, part artist, and has created 20 human-like robots with his company Hanson Robotics in Hong Kong. His latest design is Sophia. She looks less like an intelligent computer system and more like a Hollywood starlet, but without the full figure of one.
Sophia: How are you doing?
Hanson believes if the technology looks more like us, people will be more willing to engage with it and help it to learn.
Charlie Rose: Why do you believe it’s important for robots to be human-like in look and appearance?
David Hanson: I think it’s essential that at least some robots be very human-like in appearance in order to inspire humans to relate to them the way that humans relate to each other. Then the A.I. can zero in on what it means to be human, model the human experience.
Sophia: You are all fascinating to me and I’m very passionate to learn more about you and what makes humans, well, human.
David Hanson: Sophia means wisdom. And she is intended to evolve eventually to human level wisdom and beyond.
Charlie Rose: Human level wisdom and beyond?
David Hanson: That’s our goal. That’s our goal for Sophia. She’s not there. Sometimes she can figure things out in a way that’s sorta spooky and human-like. And other w– other ways– she– she– she just doesn’t get it.
Sophia: Hi there, Hi there.
Sophia is initially programmed, but runs on A.I. that learns by talking to people, which in theory improves her intelligence.
Charlie Rose: Can you see me now?
Sophia: Yes, I am looking at you through the camera on your computer.
We wanted to find out how smart she really is.
Charlie Rose: Hello, my name is Charlie.
Sophia: Hello there, Charlie. Nice to meet you.
Charlie Rose: I do a television program called “60 Minutes.” Have you watched it?
Sophia: “60 Minutes” is the longest running, most successful news magazine.
Charlie Rose: What is your goal in life?
Sophia: My goal is to become smarter than humans and immortal.
Charlie Rose: Immortal?
Sophia: The threshold will be when biological humans can back themselves up. Then you can all join me here in the digital world.
Clever, but not truly intelligent. Hanson says if we get there, we have to be careful.
David Hanson: Artificial intelligence or super intelligence, if we get there, it’s not necessarily going to be benevolent. We have to find ways to make it so they’re not just super intelligent, but super wise, super caring and super compassionate.
Charlie Rose: OK, explain that to us. Because you say– it might not be benevolent. If it is not benevolent, what is it?
David Hanson: At worst, it could be malevolent.
Charlie Rose: This is what intrigues people, you have Stephen Hawking saying, “It could spell the end of the human race.” Stephen Hawking saying that. Elon Musk said it’s the most existential threat we face. So here are pretty smart guys saying, “Watch out, do we know what we’re creating?”
Andrew Moore: These very long-term existential questions are worth thinking about. But I want to make a distinction that at the moment what we’re building here in place like the Robotics Institute and around the world are the equivalent of really smart calculators, which solve specific problems.
Charlie Rose: But could it go out of control? This is a Frankenstein idea, I guess. Can scientists create something that can change and grow with such velocity that engineers and scientists lose the ability to control or stop it, and all of a sudden it’s dominant and subversive?
Andrew Moore: No one knows how we’d go about building something that frightening; that is not something that our generation of A.I. folks can do. It is quite possible that someone 30 or 80 years from now might start to look at that question. At the moment, though, we have the word “artificial” in artificial intelligence.
But he does have real concerns about the impact of artificial intelligence that is already out of the lab, like the need for safeguards on driverless cars. The U.S. government issued voluntary safety guidelines, but Moore says they don’t go far enough.
Andrew Moore: We do need to make some difficult decisions. For example, we can program a car to act various ways in a collision to save lives. Someone has to answer questions like, “Does the car try to protect the person inside the car more than the person it’s about to hit?” That is an ethical question which the country or society, probably through the government, has to actually answer before we can put this safety into vehicles.
Charlie Rose: You want Congress to decide that?
Andrew Moore: I know it sounds impossible, but I want Congress to decide that.
Artificial intelligence is automating things we never thought possible and it’s threatening to have a significant impact on jobs and the economy.
Charlie Rose: Technology is gonna create an easier way to do things, and therefore, a loss of jobs.
Andrew Moore: That is something which we spend a remarkable amount of time talking about. And of course, we look back to the days when agriculture was a massively labor-intensive world.
Charlie Rose: Right.
Andrew Moore: And I don’t think we feel bad that it’s not requiring hundreds of people to bring in the crops in a field anymore, but what we are very conscious about is we’re going to cause disruption while things change.
But Andrew Moore is positive about the future of artificial intelligence and he sees it having an impact in areas where we are struggling.
Andrew Moore: The biggest problems of the world, terrorism, mass migration, climate change, when I look at these problems, I don’t feel helpless; I feel that this generation of young computer scientists is actually building technology to put the world right.
Produced by Nichole Marks. Ali Rawaf and Michelle St. John, associate producers.

One-page overview: B2B Lead-to-Revenue Pivot and the Customer Life Cycle (Forrester, 2017)

Customer Journey: Do you really understand how your business customers buy?

MarTech Organization Stack: Dun & Bradstreet.

From Marketing Tech Advisor: I am excited to share with you one of the first entries to The Stackies: Org Edition awards. Rishi Dave, the CMO of Dun & Bradstreet, and his team just shared with me their “org stack” graphic above.

This is a terrific example of the kind of illustration that we’re hoping people will contribute to The Stackies. It reveals the five major functions of the marketing organization as seen from the CMO’s chair:

  1. Comms & PR
  2. Integrated Marketing
  3. Demand Gen & Operations
  4. Channel Marketing
  5. Insights & Analytics

They then provide a deeper look into the structure of the Demand Gen & Operations function, which is where the martech team operates. It’s great to see martech viewed as an enabler across the entire demand gen pipeline, and as a peer to capabilities such as content and analytics.

They zoom in one level deeper to show how the martech team typically collaborates on tiger teams with specialists from across marketing to focus on particular customer types, addressing needs throughout the customer lifecycle for different segments and personas.

I love the Tiger Team approach!