ISPP 2015

Career Opportunities in Pharmacy
Precision Health Engineering – Designing Health-Centered Mundane Technology to Increase Adherence


[MUSIC]>>It is my pleasure
today to welcome back Pablo Paredes who was an
intern in my group in 2011.>>2013.>>Well, okay. 2013 was when
we published the papers. But he was a wonderful colleague for us and many of us
enjoyed working with him. He now is a faculty member at Stanford University in
the medical school. He actually started a
lab called the pervasive well-being technology lab which
is near and dear to our hearts. You can be talking about precision
health engineering today. So I can’t wait to hear when
he’s up to. So thank you.>>All right. Well, thank you so much Mary for having
me here and thanks for attending this
talk. I’m very excited. So I’m here to talk about precision health engineering
and I hope I can convey the message of what’s the new wave of understanding of
what health means in a seconds. So on a daily basis, you will see like people who are like overworked or a tired, angry, overweight, unmotivated, distracted, unhappy,
feeling isolated. This will be the dark side of
what mental health looks like. There’s like one in four people who have some problems
with mental health. We’re talking about $2.5
trillion per year problem, 1.7 of those are due to productivity
and absenteeism issues. So this is a serious thing, and we really need to
understand how to attack this. So when I talk about
precision health, I’m going to focus a bit more on
the mental health aspects today. It’s definitely a brother
concept but I will be focusing slightly on
mental health and stress. So I’m going to be
giving a definition. I want to basically say that
precision health is not precision medicine and I’m going
to explain that in a second. I’m going to be also giving
an overview of more applied say perspective of
precision health which will be what I call mundane
stress management. Basically, how do we use
very mundane objects like PC mouse, or displays, a car or things like messaging apps or just any other apps that you use on a daily basis in the office, the car, in the home,
and how to create, how to do research across
these different environments, which is what I’ve been doing so far. So precision health, one definition of precision health is the practice of personalized health. So you want to personalize
things and you want to basically understanding health. But what is health? Like let’s understand
what health means. If you look at the
definition of heath by the World Health Organization
is not about sickness where like overwhelmed
by the perspective that health is about sickness
and it’s not about sickness. It’s [inaudible] sickness,
but so far it’s been like too much emphasis on sickness
and infirmity and that’s great. I mean, we have to basically
support this type of research but not enough on like how
do we keep your well-being, your physical mental social
abilities up to par, how to monitor the baseline of this. So when I say that precision
health not precision medicine, what I basically mean is like a good analogy would be
like if you look at humans, they basically have a
sampling frequency of one visit per year and
sometimes even less than that to try to monitor and
try to do detection and try to do prediction and prognosis, and most often than not why we gets laid detection
of a bunch of problems. Now, if you look at the jet engine in the aerospace industry,
it’s very different. You have continuous monitoring
hundreds of sensors, hundreds of super high
sampling frequency, and that’s what keeps this like
high availability industry going. So the hope is that we can transform that and have
their human being monitored constantly and being able to really have early failure detection
and prevention of individuals. So that’s what precision health is. It’s more about the baseline, than any type of anomaly. I will say sadly the med school, and med schools in general don’t really know how to
deal with this problem. Because they had been
studying sick people. They have not been studying
healthy people by the definition. So at the end of the day, we still need to know
what are the means to do these monitoring on a daily basis, how to measure often
and for a long-term, and this is not easy because not just about putting sensors on people. It’s like how do we get people
to stick with those sensors? How do we get people to basically understand the value
of like prevention, and prediction, and
pro-active medicine. So if you look at the state
of the art of sensing, we focus on the sensing part first. We have like the
wearables are everywhere, but there’s a huge drop-out
rates and this is what I am trying to say about like adaption. We have great sensors and
some things are happening. This is actually the med
school, these implantable. But the drop-out rates
are still pretty large, and many of them are not
because they learned to do something or they are
better at using wearables, because they forget to charge it or things like that or
they just don’t like it. You can also say like well, how’s the sensing in other
areas like mental health. Sadly, the state of the art
continues to be subjective1` monitoring. Asking question to people, more often, less often, whatever but asking question is not the best way to collect data and it’s also very tiring
for the individual. Finally, if you say like well, you can do electrodermal activity, heart rate variability or
like new modes of sampling. They continue to be obtrusive
because you have to learn to wear these things and you have
to basically have to charge them and do these
other types of things. So our hope is that you
know we can move beyond sensing with wearables
and sensing with questions and then start looking
at their environment as well. This is one of the corners of
the projection of why they could be the future of precision. I was like how do we use
houses and very mundane things around us to generate these let’s call it Health Center Internet
of Things, if you must.>>Pablo?>>Yeah.>>How often if you go back a slide, how often do you have to charge those shirts and can
you wash them yet?>>Yeah, not really and often depends on the type
of sensing that you do. Many people advertise
heart rate variability, and what they do is
they sends once a day or only when you’re sleeping
or when you’re seating. So the reality is that if you really want to have these
continuous monitoring, there’s no good battery solutions yet for these continuous monitoring. We haven’t solved that problem. I’m wearing this ring for example, which is actually useful
for sleep tracking, and it’s not that I
have insomnia I needed. But it works only at
night, and it’s good. It’s a use case but the rest of the days doesn’t really know
what’s happening with me. But we’re getting close, but still form-factor
method I do wear jewelry. So it’s better than
wearing some night, don’t wear the wearables except
for this one because I have a torn ligament toward something. But that’s a different, that’s a
sickness, that’s health thing. So anyway, at the end of the day
yeah, that’s a good question. But it is not there yet. So can we move not only from sensing, can we also use the infrastructure to do
that even behavioral nudges. How do we maximize
adoption and adherence? Can we get to the point that we
can really like push that in. My hope is that we can understand
why is this important. If we look at this graph that I created between like while I consider is obtrusive and what I
consider is engaging, I would say like micro interventions and wearables should be like and that’s so high, not so low obtrusive speaking. They are actually showed
them the engagement. But they choose lead, they die rather quickly. The wearable, intervention
people use it and then they discard it and they don’t keep
engaged for the long-term. Sorry, we will say like nudges
in IoT might actually give us a bit less obtrusiveness
because we’re not asking you to wear anything and maybe we’ll increase engagement. Of course, if you get to
the point that we actually do like implant say and
configural environments, we will get to the point
that it might actually have higher engagement because
it’s really in your body now. But it’s arguable how
obtrusive this can be.>>It also, it’s not clear. Start to design the intervention
to make people use them, right?>>You have to design
interventions to get people to use
that [inaudible] and delivered value over the long term and that’s actually
part of the challenge. So in this talk, I’m going to talk a bit
more about this subset of the ecosystem or
potential interventions. I’m not going to talk
a lot about this one even though I’ve done some research, but I want to discuss
some ideas on this area. There are concepts that I also
want to bring as a baseline of this conversation is this idea
of what is health for people? Why do healthy people
think about health? The reality is that healthy
people or high level of health don’t really have a huge motivation to
stay or become healthy. They are motivated to be beautiful, to be interactive, to be
attractive, to be smart, but they are nothing
like I’m going to keep my cardiovascular health, I’m going to keep my
mental health up today. That’s not how people think. They think about the
products of being healthy, not about health and themselves. It’s only when
deterioration happens that people start really getting when
you’re health level goes down, that motivation goes
up and were like okay, now I really care about health. Because something broke, because
something’s not going well, so health becomes
fundamentally important. But do you know precision
health works with these people. When I talk to my colleagues
in the med school, it’s like these are not
patients, these are individuals. These are patients,
these are individuals. They have very different
levels of motivation; therefore, also a very
different psychology. So you cannot design the
same way that you design for these people or these
other ones over here. What we hope is that
we can create, engage, and design to bring people, keep them healthy and find the sweet spot between
engagement and efficacy. You have to be efficacious about sensing and about the
interventions that you deliver, but you have to also be engaging. I call it a sweet
spot because I think it’s another trivial way of finding a very efficacious
sensory intervention and very engaging over the long term; these are challenging problems. So one approach that I’ve taken
through my formation and through my research is combining a few philosophies to try
to do something about it. One is design thinking. For the creative aspects, combine it with the
scientific method in order to postulate hypotheses, but once you have also
explored what are the needs of the user in order to be creative but also to have validated ways of
sensing and intervening. But for those of you who have
experience in both ends, the people who practice
the philosophy don’t necessarily mixed that well. Sadly even though they shouldn’t, somehow they just found
themselves at odds many times in the way that they
formulate questions and problems. I think we should definitely train people to think in
the intersection of these things to create this engaging
and efficacious sweet spot. I also use tools like
machine-learning, sometimes it’s very important. I consider it more like an aide
rather than objective even though it’s the emphasis right now is
a lot about machine learning. But then the other
complaint that’s also very important to me
is implementation. There’s a huge field
that’s nascent in the public health realm called
implementation science. There’s an average
of 17 years to get a good validated finding in the med school in the actual
market like 17 years, even though it’s been validated
for many, many iterations. So we’re not really that great at doing implementation
in the clinical world, much less implementation
outside in the health domain. In other domains, they are very
good like Microsoft, Google. They all know how to
implement problems that lead in their work for a long-term. That’s what we need to learn from each other
in order to keep doing all these things and machine
learning helps us with the predictability and
implementation science with this capability. So I’m going to tell you a couple
of projects during this talk. One is the sensorless sensing, which like bring some ideas that
we created with sensing and AI, bring it from different domains
like the office into the car and then eventually also get it
to work actually in a real car, in a real environment. So we keep evolving
towards implementation. Then this other domain, these are broad projects in my lab is called subtle interventions
which started more under design thinking
area got validated in a simulator and now we’re actually have revalidated
it on the road. See, revalidation is a
very interesting word. I’m going to just make a small pass is that if you go to the med school, I remember the first
thing that I got asked when I joined the med school
almost two years ago is like, “Has your things being revalidated?” I’m like, “No, because CHI only wants us to publish the most innovative
things,” and like “Well, once you revalidate it
three times then we talk.” That’s the med school. They don’t just throw things that have not
been validated and revalidated. For them, revalidation
is very important. So anyway, the precision
health design space for me also has another challenge. So combining those things I told you, I think there’s two types of tasks that you should go
to do precision health. One is the exploration part, and then the in-depth
validation that we discussed, but these are two different types of people and two different
types of projects. So in my lab, I have projects
that I will consider are more shallow and they are in the creativity phase and I have
people who are very creative. But then I have projects that
I go in-depth and revalidate and will agree to study with very different type
of people, for example, the sensorless sensing, the type of person that
I’m hiring that was like an expert in biomechanical
manipulation of the body. He doesn’t know about this sign but he knows exactly how the body moves. Because we’re revalidating
the things that we have been finding about that or in a
subtle intervention, also, we’re hiring people who experts
in machine learning and IoT to deliver new interventions while I have designers and other
people doing that thing. So the combination
of these two things I think it makes the
lab very interesting, makes it also hard to manage. It’s a very complicated lab to manage because you have
people who are like, “Let’s just scratch
the surface a lot.” I was like, “As long as
you don’t prove this, I will never believe
what you’re telling me.” So it’s a very interesting
creative tension, lets call it that way. I think Microsoft research is
one of those places where you’ll also find a lot of these
things, super exciting. So very briefly about
me just for the sake of explaining why am I doing
this thing? Where I come from? I gave up my business career because of family experience
in mental health and I’ve been studying mental health
ever since I joined the PhD at Berkeley and now I’m doing what I like these wellbeing
and precision health, but I’m a CS PhD working
in a med school. So that’s an interesting combination. I was hired to be part
of these new center called the Precision Health and
Integrated Diagnostics Center, a broad center where my lab resides, and it’s about trying to discuss these problems like how
do we design new things? How do we do the behavioral
or economics aspect? How do we do the economics of
adoption of proactive health? It’s a very, very new way of
thinking for the med school, they are learning how to do it. Even with me, I was like, we still don’t understand
you that well but we’re starting to get
something out of you. Just to clarify, when I say
wellbeing is a very broad topic, there’s a lot of things. So what I’m going to do is
oversimplify tremendously in the focus for this talk and talk
mostly about stress management. That’s not what it means
what wellbeing means. Wellbeing is a very complicated
large construct that’s been discussed in many ample
terms in public health, but I’m going to focus
on stress management. So let me tell you about
stress management as an operationalization of
the research that I do. So remember all these
people that I show you, they went through all these things. So what do these have in common? I don’t know if you can help me
find something that’s common. If you look at them whether they
happy or angry or whatever, most of these people also suffer of stress or arousal, economic arousal. It’s a driver for pretty
much on any other emotion. The truth is that stress right now, it’s becoming a really
complicated problem. In America alone it’s $300 billion. Eighty percent of adults
have reported to have issues with stress and not being
able to deal with the stress, and it basically affects
any part of your life. If you read the fine print, “Something’s going to
break in your body.” If you talk to an internist or internal medicine person will tell
you something’s going to break. We don’t know yet what’s going
to break and that’s something that we have to actually start understanding better
with more sensing. But there’s another commonality
also across these people. I don’t know if you
can spot what else is common across all
these pictures here. Something specific? No? Well, these people are indoors. So we became an indoors species. We were not designed
to be indoors species. Look at our arms and our legs. We’re roamers, gatherers, hunters. We’re designed to move, and we spend ourselves sitting. So you want to find a human, find a chair. Very easy. So 93 percent of the time indoors, that means that this is
not our new ecosystem. So at the end of the day, we have to attack stress
in its ecosystem. So anyway, stress it’s very
interesting thing that we talk about challenge and threat and how can we cope with that
thing? Can we do it? That’s how our body produces resources and he gets
like tense because we’re like mammals then we’d
produce muscle tension to get out. We have the Yerkes-Dodson
law that tells us that we need some level
of stress not too much not too little to operate
and it’s been proven mostly with animals and a
little bit with humans. Then Sapolsky that
tells us that stress is very useful and zebras don’t get ulcers because they use it very well. Once the lion come, the zebra runs and that helps your metabolism to basically dissipates stress and that’s
exactly what it was designed. But the truth is that the cycles of a stress and this
stress and stress and this stress which are very normal in modern times because of the
frequency of the stressors, they are becoming actually
a problem because of this repetitive stress get us to regenerate our outer immune system
and other types of problems. So at the end of the day, stress is something that needs to be
managed, not eliminated. It has to be managed.
It’s part of life, it’s part of support. But I have a question,
another question for you. So we have the zebra that
runs and doesn’t get ulcers, but what the zebra need to be able to actually do and be efficient
with the use of stress?>>Space to run.>>A prairie. Without a prairie, that poor zebra might actually die of a heart attack if the lion comes
and there’s nowhere to run. So stress management is a wild works because there’s
the space, because you move, because that’s what we
do when we’re stressed, we get tensed, our muscles get jacked-up to defend
with the fight or flight. But look at our modern wild, at 93 percent of our times, we spend in these environments. So where do we run? How do we move? How do we manage our
stress in these things? So then you will have a
philosophical conversation, “Oh, let’s get out and
let’s do exercise.” I’m like, “Sure, fantastic.” But you spend 93 percent
of your time here. So you’re basically designing
for the seven percent. “Oh, we want to
convert these people.” Okay if you’re super successful, so you convert 10 percent
of those 93 percent. Still, you have 80 percent people
who basically do this every day. So why don’t we redesign this spaces? To do a stress
management in the wild. This is the new wild, this is our wild. People would say,
“No, no, that’s not. No, we have to get out of offices. No.” Just accept it. Suck it up. Offices are
the new wild for humans. Simple. Let’s redesign them to make them viable to
do a stress management. The problem is that right now, stress management is
really very lame, like there’s a lot of stress
related visits to primary care, and only very few gets some support from primary
care, like almost nothing. They don’t know how
to deal with this. They send you with an
Advil for your headache. If you say, well, then the mental health well
has to be taken care of. So mental health we say one-quarter
of population has issue, the truth is that about
10 percent of them gets refer to a some kind of like
supporting institution. 20 percent of those get
diagnosed with something, one-third of those
complete treatment, and one-third of
those do not relapse. So you compute this
beautiful like cascade, you had a tremendously
and efficient system for mental health,
doesn’t work either. We’ll say, “Okay. No,
no, no. wait a minute. The zebra doesn’t go to the
doctor is your problem, so why don’t you deal with that, and put it back on the user?” So what do the user says? So at the end of the day, what do you think are the two
major reasons for lack of good stress management
in the new wild?>>Time? We have time.>>Time. Yeah, lack of
willpower and lack of time. Yeah. These are like large cohort
of a 1,000 people that APA did. Yeah. So we’re also very bad
at managing, there’s no time. So we need support, we need support. So for me, this mundane
stress management is like, I’m going to take the
bull by the horns, and I’m going to design for the 93 percent of the time that
you spend in these places. Offices, cars, and homes. I’m going to try to see a way if I can make a chair into
something that works for a stress or a table or a lamp, or something that’s already there, and then learn from each other. So let’s start with these
two environments first. The office and their
home, to tell you some of the things that we’ve
been thinking and doing, and then I’ll tell you
more about the car. So this was the first study
that inspired some of the work. We actually published this
paper a few years ago. Can we sense stress test by
the way you move the mouse? This is a five dollar mouse
from Walmart, no sensors added. Just look at the way
you move x and y, and then try to create a
biomechanical model of the arm, which gets affected by
muscle tension because that’s the normal reaction, and prove if the damping frequency of a mass-spring-damper model of that
thing will actually increase, because it’s correlated with a k-coefficient of
your muscle tension. We proved that and we showed
that it is possible to do that. What we did is we created a
bunch of tasks point and click, or drag and drop, or steering, and we replicated essentially
the Fitt’s Law experiment. Then we showed that for
every combination of distance and width for
different targets, we were able to actually find a difference in a
stress and no stress. This is the mean without
necessarily their bars. You’re actually able with
about 10 random samples and a simple machine learning classifier to get already 70
percent accuracy with a very simple tool which
is Damping Frequency. The way that we did it is using a tool that we borrow
from speech processing, the linear predictive
coding in order to do the generation of an old pole model of your arm and try
to establish what is the major component of
oscillation of your arm. That’s how we created this thing. At the end of the day,
this is very simple. It’s just software, it’s
really not heavy in any way, and it can be implemented in about four billion mouse that are
already deployed in the world, like PC mouse, five
dollar from Walmart. We also recently moved and did studies a replicated
sense study for touchpad. Now, touchpad is a very
interesting problem, it’s way more complicated than mouse. It has like extremely complication in the way that people
use with two fingers, with two hands in different ways. Also, the hand, and the wrist, and the fingers are way more
complicated than an arm, and it requires a more analysis. But we were happy to find
that for all the conditions, all the combinations of
distances and widths, we found actually a
difference in one of the features which was the
area under the finger. So the amount of like, let’s say as a proxy for pressure, the area under the finger
seems to be clearly correlated with the amount of
stress that you have for touchpads. This paper’s in submission right now. These are the values actually. So we got people with stress, with a stressor like doing math, and then we measure
the level of stress, and then we basically show that
this area under the finger worked. Right now, we’re actually working, we’re very proud we got approval
to implement our technology both the mouse and the
trackpad in a clinic, and the Hoover Pavilion is one
of the family medicine clinics. So right now, burnout is a huge
problem in the medical domain. Burnout is a prevalence of
higher than 50 percent. I’m not going to spend a little time. But essentially, burnout is a combination of stress
and dehumanization, and dehuminization happens because of the cycle of production for
new patient to be treated. So doctors don’t have the time to develop an empathetic interaction, and that dehumanization,
let’s call it that way, generates a lot of emotional
traits and stress. So we’re going to be
using the mouse and also the trackpad in all the
computers that practitioners are doing and where this is going to be longitudinal in these
high-risk population. Now, one of the interventions that we’re actually testing
right now to try to modulate stress is how do
we enable movement, right? So movement is, that’s
where the zebra does. One way of looking at movement
is the past 50 years of studies in sedentarism
having focused in what’s called moderate to active movement. But now, more recently, there’s a new trend to try to
look at what’s called the low intensity physical
activity, the LIPA. That means, even like
changing posture or moving a little bit is
predictive of your mortality. So what we implemented was this desk that we call
the haunted desk, that basically is a thing
that moves on its own. The reason we created
these robotic desk, which I know it looks silly, we investigated that
70 percent of users who start doing this desk remain
seating after three months. So after three months, nobody uses the seat stand desk
anymore, just sit in there. Then you ask them why,
and it turns out that about 70 percent say like, “Forgot.” Apathy. I’m like,” Okay. So if you don’t care, then
I’m going to start moving it for you and see what happens.” So we basically did our early analysis of what
is the reaction that you have to a manual versus an autonomic desk with either
small cohort of 20 people, and we probed to see how they felt
about their stress and thing, and we found no differences
between the two. They do become very interesting. Very interestingly, they become very opinionated at the end when you say, “Give me a reason between the two.” They will tell you very
strong reasons about, “I don’t want to be
removed the control. How does a desk there
to tell me what to do? I want to have control.” But when they are doing the thing, you will see people like literally
seating and then going up, and they didn’t even pay
attention to the desk. So we’re going to test
this concept of what we call non-volitional behavior change. We’re going to force you to move. No persuasion anymore. We’re going to push in the limit, like non-volitional to see
we can get you moving, and we want to do it longitudinal. We’re going to start a
longitudinal study now in January, and we’ll report back. We hope that we can train a neural network that’s very
adapted, it’s very good. That’s actually in our brain, these neural network to
basically get you used to go up and down without
like complaining anymore. As opposed to training a
super advanced machine learning thing to find the
perfect time to move the desk. There’s some papers I have
already investigated these. It’s a very complicated problem. You want to make it perfect,
it just doesn’t work. So we’re going to go and in your face just move it and see
if humans get used to that.>>So that’s one of the movement
things that we’re trying. The other one that we’re
trying is can we actually regulate breathing in
a very peripheral way. It should not be called subliminal,
it should be peripheral. Peripheral way —
we’re developing this. Basically, it’s a plug-in that activates a breathing
edge around your browser. That’s linked to your
breathing rate and as you start getting stressed, it tries to bring it
down to the same level, and it’s very peripheral.
You keep working. You barely notice
the thing and we are already have collected
some preliminary data. There are actually people do keep a lower breathing rate changed
by the exposure to this. It’s very soothing frequency
of the peripheral vision. We’re actually right
now investigating what’s the right application because if you study
the visual cortex. The peripheral vision actually
has a stronger connection through your parasympathetic
nervous system as opposed to the foveal vision. The peripheral vision has
a stronger connection. So there’s a huge potentially. We can activate the
periphery stimulus. You might actually
have a great chance. So we think that this will be even stronger if you have
bigger monitors where you really activate your
peripheral vision as opposed to smaller ones or
maybe in the car as well, maybe in the thing. We need to work with your rods, the rods not with the cones, right? The cones are the ones that basically
get you alerted and aroused. The rods can help you calm down. The other thing that we have been doing is these intervention suites. This is actually the work that
we worked with Mary here. The path therapy, right? How do we do these
micro interventions that was the earliest
curation of this thing. Can we get on a stressor and use some kind of
contextual information to provide a recommendation
of very mundane things? I remember we went on and say, “We’re going to pick
the most mundane things but the ones that
are popular though.” So we went on and look at the very popular things
that we use online, that you already know how to use. So I’m not going to
teach you anything new. I’m just going to get you going to do exactly the same
things that you do. How do we recommend with
the machine learning tool? What is the best thing
to do at certain point? We created, and sorry for the
busyness, this is part of our paper. We created a suite of interventions that basically
were essentially two parts; a prompt and a link. We say like, “Go do
this in Facebook.” For example one of them was like, “Go and look at your Timeline and just look at a good
part of your life.” That’s actually highly correlated
with the three good things technique that you see
in positive psychology. So we found actually, we work with our clinical psychology to find an intersection between the popular mundane apps with
popular techniques in psychology. It’s amazing actually. Many
of these things I would say, people are already with the purpose
of stress management, right? Without them necessarily
calling them that way. We use a bunch of things from
the phone in order to train our contextual bandit and try to basically deliver these
things over time. We ran for weeks, with a bunch of samplings that
we did daily, weekly etc, and we did this construct
of two-by-two between random and machine
learning recommendation and also the possibility
to select or not select. We eventually saw that there was
a lot of personalization that happen across all these
different interventions with all the different participants. We started with 80 or something. We ended up with 20, 70, right? Although I’m not remembering. So the dropout was equivalent
to what you see in other apps but we saw
there was personalization, and if you see an aggregate of these, we basically, the random
recommendation was this versus this is why
we ended up at week four after training several
times a contextual bandit. The funny part is that when you
ask them separately, to people, which ones they prefer, they
were highly correlated with the ones that the algorithm
picked for themselves too. One thing that we
observed that was very important was the novelty effect. This graph’s a bit busy but also
pay attention to the scale. Turns out that people
started testing, for those who were in
the selection section, they were testing new apps. They wanted to explore
instead of exploring. After a while, they started
becoming a bit more like used to what they receive
from the algorithm. But after two weeks, they started, again,
exploring, exploring. I’m like, what happened?
We supposedly, already identify the best
interventions for you. But now you are not happy. Why not? Then we did interview a couple
and they say, I’m bored. Yeah, okay. Breathing is good for me. Moving, doing. Yes, sure. What else is there out? That’s a huge challenge I would
say for in the [inaudible] design. The boredom and novelty is really a complicated problem that
needs to be attacked. I think it can be done systemically
with algorithms as well. The other thing that we found very
interesting was this paradox. Many people who dropped
early in the study dropped because they said
your app stresses me out. I’m like, no way this is
a stress management app. It’s supposed to help you
not to stress you out. They say, well, the
thing is that once I installed that thing in my phone, I keep being reminded that
I had the stupid app and reminds me of my stress and how
stressed I am and I’m like. So awareness for stress
management is not trivial. Everybody thinks awareness is the beginning of all
held interventions. I disagree. For stress, you have to be very careful. Awareness can actually be
an exacerbation factor. You have to be very careful
how you design these things. So one of the extensions of the work that we did with
Mary is this idea of popbots. This where we got funding
from the Human- center AI Movement in Stanford. So we’re actually doing a similar
idea but with micro chatbots. What we’re trying to
do is basically create a family of chatbots
because those couldn’t understand the world of childhood
development for mental health. It’s been very hard. Eliza, the first mental
health childhood, what’s created in the 50s, what happened 70 years and
we haven’t cracked the nut? It’s not a trivial problem to do
because it’s very hard to keep track of all the complexity of life and mental health,
basically about life. So what we did is we proposed
together with Dan Jurafsky, who’s my co-researcher, a
professor in linguistics. He’s like, let’s create
a bunch of apps, a bunch of micro chatbots
and then let’s create a recommendation layer that
will recommend the chatbot that works for you for the different
contexts and see we can find a posse of chatbots that will work for you and discard
those who don’t work for you. So in that case, you can
actually do some trimming. We basically started with a Wizard of Oz to validate if
this idea of having multiple channel works
and we basically had variable several
chatbots versus control, which was one single chatbot. People found that, sorry, this should have been
the other way around. People found that having multiple
chatbots is actually a bit more useful than having
a single chatbot. That’s the very beginning
when we started with a Wizard of Oz simulation. But then ever since, we actually deployed the app that we created in Telegram and we basically this is where we applied most of
the money in creating these app. Basically, delivers these bots and we have been testing
it with students, with non-students, and now we’re
going to do a larger cohort. We’ve observed that most of the stressors that
people report when they talk to the bot are about work
in school and productivity, and some with relationships etc. So we need to learn to
identify the things. The other thing that
we observed is that by asking them was it
helpful or neutral, we are right now like in a complicated point
because there’s people who find the bots
useful but there’s also people who didn’t
find the bots useful. But we started investigating. We realized that there’s
very important problems here that needs to be attended especially from the NLP perspective. One of them is parsing
of the stressor. When you ask, when the bot ask, how are you doing?
What’s your stressful? Somebody says, I’m stressed because I’m running
late and I might get fired or maybe get
punished or something. What is the stressor there? Is it that you’re late or you might get late or a
compounded of those two? If you understand how
to parse and basically talk back the user exactly
what the stressor is, the rapport builds up immediately. The bot understands me. If you just say, Oh
sorry, you are stressed. It’s very rough, the interaction. So parsing a stressor from
an open-text question. It’s not trivial because the knowledge domain
of stressors is huge. So we’re working on
trying to construct a very large database
to try to start parsing the stressors in a much
more efficient way from an NLP perspective. The other one is we have
to answer this question. Can I provide support? Sometimes people want to use this chatbots even though they are designed for daily stress management, for things like my son is suicidal and wants to like
– very complicated problems. Now, those level of attention
of treatment cannot be dealt with like a
simple chatbot that only gives you simple therapeutic advice. Those should be escalated. So we need to determine in terms of severity and complexity,
can we support? Basically, escalate
those where we can. The other one that’s very important. We cannot just launch the system in a random way like we
launched the pop therapy. In pop therapy, we
launched literally with a random assignment like traditional contextual bandit
and people kept trying. Here, they create a
very strong connection to the very first conversation. So if the very first
conversation with the user, the very first bot that
they test is not great, that’s pretty much the
end of the engagement. So you cannot just start
with a random baseline. So we’re creating a free
train recommendation based for this problem.>>The other thing that we did with [inaudible] I think is just word quickly sharing is we did
some qualitative analysis, and what we did is this
card sorting technique, where we asked them
to take some cards that look like with
a stressors in them, and try to ask them to put
them in different boxes, that had the different bots. So let’s say, for example, what kind of bot do you want for
financial problem? So I want the problem-solving one. What do you want from
the relationship one? I want the humor one. We asked them to distribute them, and to talk about that, and that was all very interesting. But then the part that’s more interesting about the
whole thing is that, we then added humans, and we say okay, let’s say
you have humans and bots, and you have humans always available. So forget about availability. Redistribute the cards after
they have done the bot. They didn’t put everyone
in the human side, they left some on bot side. Why are you leaving
bots? Aren’t humans supposed to be better to
provide support and all that? “Yeah, but no, sometimes
they are not better. Sometimes they take too
long to give you advice.” Sometimes they are,
I have to engage and go through all the conversation chats to tell them what’s going on. Sometimes I don’t trust them. They want bots in their lives. People want these bots as part
of an ecosystem of support. So that’s why we’re trying to
work with these ecosystem. The delays part is fascinating. We also tested, when we were doing
this qualitative assessment, they told us, “We want
the bot to have a delay.” I said, “Why do you need a delay?” Well, the problem is I sent my stressor and immediately
response back to me. There’s other people who
have seen this before. So like, what is a bot? It’s a ready process
in two milliseconds. No, it doesn’t matter. I know, but still I need the
bot to feel human. So that need to have delays and feel human is a very interesting one. Some of them wanted
to use multiple bots. They say, I don’t want
just one, but I want to start first calming myself down, and then I go into problem solving, and then maybe I do some
breathing regulation. The humor bot was a
very interesting one. This was love and hate relationship. Some people love the humor one, it was like a simple
joke type of thing. So people hated it with
their whole heart. Seems like, “How do you dare to try to make me laugh when I’m stressed? Don’t do that ever in your life.” Now, we’re testing also
this informal language. We’re testing why did
we actually add LOLs, and little faces, and like sup, and instead of, “How are you doing? What’s up man?” type of thing. Again, very devising, younger people
find it cute and interesting. Older people, oh my God, there was one person who
got very upset with LOL. Like, “How is this
bot saying LOL at me? I’m stressed. Stupid bot.” So these little things we’re investigating were
linguistics and try to design an interesting platform that
combines the HCI design with NLP problems and tried to
deliver this microbots. We’re also investigating
the voice part, but in the voice part, voice version of these bots, we found a very interesting problem. Maybe there’s a solution like I haven’t investigated deeper enough. So we did it actually with
a robot format in the car. These pauses that people
make when they are close. “So how are you feeling?” “Well, um, you know, it’s been hard. You know,
I have all this problem.” All that searching for the
reasons on how to spread your emotions that are very
natural for humans to make pauses. You break this voice
interfaces immediately. Try to express your feelings to
Alexa or to any other thing, and tell me or Siri, or anybody. Tell me they can
actually understand that you are not done talking. Very complicated this,
and we try to do phonetics and proxemics
and other things. English is a very complicated one. Seems like Japanese is more doable. Japanese seems to have like some prosody inflection that will
tell you that you’re not done, or done, but English,
it’s all over the place. So that’s one of the things that
we will be working in the future. Can we actually also find a solution? Well, with a lot of data, we might be able to train
a deep neural networks, but we don’t have a lot of data
for stress management in the web. The last thing that we’re doing
right now, chances are mostly, the future of what we’re
doing is we’re actually combining stress and
productivity as well. So one of the challenges
with healthy people, as I said, is stress
is not an [inaudible]. Why do I have to manage my stress? For what? One of the key
challenges is productivity. So we’re really
bringing together some of the knowledge that we
have created with sensors, and some of the knowledge that Michael Bernstein and his team
has created with Habitlab. This is a new tool that they created to do procrastination management, only exclusively
procrastination management. But procrastination is a
very emotional problem. They haven’t touched the affective
component of procrastination. They only basically give
you tools that basically block Facebook or delay Facebook. Some of these times actually this
became an integrations work, but the thing is that
procrastination is emotional. If you think about
it, procrastination is really almost non-logical. Why would you procrastinate
if it’s going to hurt you, hurt yourself? So it’s very emotional, the psychology behind it. So we are combining the Habitlab learning with the PopTherapy
stress management and Popbots in trying to create an interesting
probe of stress and productivity. Habitlab has already 12,000 users and we’re working
with these people, and they have 50 new users every day being recorded for
this plugin that they created. All right. So that was the office. Let me tell you a little
bit about the car. Why the car though? Actually, it’s not about the car. I’m going to skip this
one. We already went this. It’s not the car; it’s the commute. The commutes are very
interesting problem. So remember, there’s no time
to deal with the stress. Well, that’s because you’re
running and working. The commute is one of those
times when people actually don’t really have a very strong
opinion of their time use. It’s like, “Will you do a breathing
exercise during a commute?” “Oh, yeah, probably.” “Yeah. Would you do something?” “Yeah, I’m not working. I’m not either at home,
I’m transitioning.” Well, the commutes are very
interesting thing because in the US, people spend in cars 60 minutes
every single day on average. This is 128 million people that
jump in a car every single day, 87 percent of the labor force
commute by car in this country. This is 128 million
times two hours per day. So this is a lot of opportunity, 140 million of those commute
alone by themselves in a car.>>Can we do something to
transform the commute, which is for many a bug
in life into a feature? Again, I’m like taking
the bull by the horns, I’m like commute yeah. Well let’s do something
about it right? How can we use a commute to break the exacerbation chain
between office and home. To do work life balance
and basically you create the opportunities
that we want to do that. So again doing the sensorless
sensing and subtle interventions. We actually did the same study that we need for the
mouse for in the car. Also we’ve been dealing with like
breathing regulation in the car. So the sensorless sensing part is essentially an extension
of what we said before. We can use these beautiful
sensors for movement in the car. There’s movement for the upper
limbs and for the lower limbs. They are already being
encoded pretty much the drive by wire cards
modern direct work. They all have encoders are ready. You can actually go and buy an
OBD an On-board Diagnostics USB from online and get data
out of your car in no time. So how do we re-purpose this car and the reason is that we’re
also doing the same type of approach that we had
before in the car instead of looking at the autonomic
nervous system analysis. We’re looking at the somatic
nervous system analysis by looking at how you move things. That’s how we basically
approach this problem. This is a traditional
psychophysiology that you can get in a car as well. This is what we think
we cannot value. Looking at movement from your neck, your shoulder, your legs, your arms, and converted into
non-obtrusive stress sensor. So what we did, is we
basically started with a simulator study because you
never know the effects of stress. So we got people driving and we monitor essentially
you’re turns right. Anything that’s positive
is a right turning. Anything that’s negative is
a left turn and we got them. We basically wanted to do
the mass spring damper model again but this time
with angular movement. We put people in both
stress and calm condition . The stress condition is
a bit peculiar though, because in the mouse you can get like a 100 clicks in about few minutes. You can now get a 100 turns
in a few minutes in a car. So you have to stress them up. But then you have to
sustain the stress. So what we did is we
did applied math. Math the same that we did
in [inaudible] and then we apply heavy metal during the drive. This is actually a former. Yeah there’s actually a very interesting journal
psychophysiology paper that shows that heavy metal is actually pretty stressful
for 90 percent of the population and very calming
for another 10 percent. So the experiment was
like you know 25 people. You do the baseline with a
video then you calm them down. Then you do the driving thing
and counterbalance everything. As I said you know we
have math but also heavy metal to keep the stress
high while you’re driving. There were a couple of old ladies. We almost killed them
[inaudible] and they said this is the worst
experience of my life. I have never had something like that. Again, damping free is the damper
infrequency going to change. We use this advanced simulator for driving that basically
had high-frequency, high sampling rate to try to
get a more precise reading. We on purposely didn’t put any type of traffic or any
type of other stimulus because otherwise you will these
stimulating other complaints of the somatic nervous system. So simply driving learning cars. What we did is we basically
did a simple analysis. We took the terms, we got the
absolute values then we found the peaks and look
at the monotonically increasing segments of turn. Then we basically picked the
first part of each segment, the who was in a turn because that’s the first part when the intention
occurs with your muscles. The first few degrees. After that there is other
forces that influence the movement including the
movement of the wheel itself. So we wanted to keep the very
first part of the movement, which has a stronger a
muscular activation. Then we apply the same technique this fourth order Linear
Predictive Coding, which is a speech technique. Speech processing technique to predict the second order
mass spring damper. We look at basically what’s
called the under-damped poles, which is this stable the inner
poles of the system and we were able to predict the
damping frequency correctly. We validated that stress, the people got very stressed. We measure also the
pleasure our element. As you can see people really hated the heavy metal like really badly. Here there’s how we
basically proved that damping frequency also works and we published this paper
in [inaudible] year, this is the fast and furious paper. That’s the title of the paper. The beautiful thing
that we also founds that we didn’t need 10
minutes of driving. Which was more or less what
we did to get like a law. With about eight turns. We were able to also
see a radio signal. With the first eight turns. So that means that you
know even when you’re like just moving out
of a parking lot and getting into a highway
where it’s more stable you might still get be
able to detect something. We also simulated, we decimated the signal in order to get
a lower sampling frequency because the OBD that you
get the signal is not sampled at 1000 hertz like
the simulator that we did. It’s usually are
around 20 to 50 hertz. So we wanted to see
if it works with like decimation and work well as well. We test replicated these
in the wild and we’re like should be published in
inward soon and you become. We did it in the garage or
under the office where we work in the [inaudible]
building where [inaudible]. So we moved there, we took a very has nothing to
do with [inaudible]. It says the left over
of that building. So we use these because
since it was abandoned, well abandon its remodel. This was the first lab of the med school in that
building and we’re very proud to say where we have the first lab which is
a garage with a car. We prove that it actually
works also very well in the wild and we’re like submitting
these my postdoc did it. I’m going to skip this one. So that’s essentially the
sensorless sensing part. Now on the subtle interventions part. Time how much, Five?>>Yeah.>>So in the study interventions
we started testing like will people actually
do movement in the car. As you can see I’m
obsessed with movement. We saw that can we guide them with like you know a
chair that has a bunch of like models to guide you to
do things like guide you like swipe up and down like to see if you can bend or
or do twists and things. By activating these models, we wanted you to do like side stretch or head turns or other things. Eventually the one that really
worked was the breathing. They will do guided breathing
with these type of chair. Like the chair that basically tells you when to breathe in
and went to breathe out. They basically said
that’s what I want to do. Deep breathing and maybe other types of breathing likes high
breathing thing like. Tell me when to sigh
and things like that. So we went on that wasn’t exploratory study and we
basically did the deep breathing. I was going to ask you to take out 10 seconds deep breathing so you can feel the power of deep breathing. So we can do like 10
seconds deep breathing and see like how you feel. Let’s do it together, so
I can also take a pause.>>So see, deep breathing is
freely soothing, even 10 seconds. The problem is that it’s really soothing to the point
I can get to sleep. So you’re driving though,
and we don’t want that. So we actually wanted to test
it can be done while driving. So we basically got people
driving and breathing slowly, and we saw that we can go from 19 to 11 breaths per minute in about 40 seconds with
the guidance, which is not bad. It helps you reduce
your breathing rate. Chairs with that simple
guidance on your back. I was going to explain the reason I would deep breathing is because, it modulates basically
your oxygenation and also your respiratory
sinus arrhythmia, that’s how breathing
regulates stress. We focus mostly on going from normal to about 30
percent below normal. Now, really, the breathing is considered to be around six breaths per minute when you’re really good, and the benefits are fantastic, but you really have to be
very good to do that thing. Many people actually cannot
do very, very deep breathing. So we focus only the early part
which is the 30 percent reduction, and we tested several interactions, an accordion like interaction,
or a counter like interaction, or spiraling interaction
to see which one or just one counter that
basically tells you. Eventually, most people prefer
a very simple interaction, a swipe that goes up and
a swipe that goes down. Some like the other ones, but this metaphor was the best one, the one that we take,
we implemented that, and we basically got people
linked to your breathing rate, and we contrasted it with
a voice activated signal, the car you’re driving, the
car as you are breathe in, breathe out, breathe in,
breathe out, talks to you. We ran this study in the city
and also in the highway, and we tested the haptic
versus the voice. The reason we do these in car research is because
cities and highways are very different places to
test any of these things. Then you have typical baseline and
then the different conditions, and then you basically
counter balance this thing, I’m going to just keep
this design period. So this is the apparatus
where the person was monitoring a person,
and this is the result. So we had people that
reduced their breathing rate while they were exposed
to the guidance. But also after the guidance was
turned off, that was pretty cool. So you turn off the guidance, and they remain breathing slowly.>>Looks like the voice was
better than the haptic.>>Now, the voice was very heavy, also started a bit
higher the baseline. But it was better than haptic, but sustained the effect of the
haptic is a bit more flatted, was a bit more sustained
in the other one. So both of them worked, I wouldn’t say that I have to beat the other one. Both of them work. They both work and they both reduce also higher variability or
increase in this case, the RMSSD. I think the other
element is also to test. Sure, your breathing is low, but are you going to get into an accident. So we measure two elements
safety and performance, there’s a difference between
safety and performance in car. Safety means that can you die, performance you suck at driving. You can still suck at
driving and drive, but not be in a place like this. So we didn’t find any safety issues, and we found that people perceived the haptic to
be less distracting, and when we probed a little about what they prefer, they said, "I prefer the haptic." These are their preferences: it's less cognitively demanding, meaning "I don't use my back for anything else," whereas with the other one I have to be paying attention. It's also easy to re-engage, which was fascinating. They basically said, "I'm driving, and I don't want to breathe right now, so I just lean forward, I don't feel the thing, I do my maneuver, and then I lean back. Okay, now I can continue." They preferred continuous guidance over a [inaudible] prompt, a nudge. They said, "No, I don't want a nudge. I want you to keep telling me how to breathe, because otherwise I get confused, and I keep driving and I don't do it." They reported many other positive subjective qualities: likable, easy to use, efficacious, etc. Yes, some people said, "Well, I prefer that you put this on the highway, not when I'm turning or when I'm at a red light, or something like that."
That would be good. One side effect of the breathing guidance showed up when we also tested it with autonomous driving. Look at this guy: 40 seconds in and he is sleepy, badly sleepy. So if you don't have Level 5 automation, don't put these things in. I asked the Mercedes-Benz people, "Are we going to put in [inaudible]? Do you have Level 5 automation in your cars?" No. So they don't do it, because you might kill people with this thing; you will put them to sleep very quickly. So we actually also tested: well, what happens if we
breathe faster instead? Can we wake people up? We have a paper that actually shows we can wake people up by getting them to breathe faster. So we're working with a truck company now to try to improve performance. We tested it in the same garage, with a different cohort and a different circuit, and we showed that it actually works. I'm going to skip a little bit, but basically we used a control, and the intervention worked really well, and this is for non-conscious breathing rate, so it's a replication of the other study. But we also did a stress intervention: we stressed people, and then we verified whether, after a stressor, we can reduce their breathing rate. It actually works even better when a stressor is present. So I can bring your non-conscious breathing rate down; I can really bring you back down from a stressor in the car. So anyway, this is the summary
of what you can do in a car: the commute is a beautiful thing. So it's not really about the car, it's the commute. There's such a great opportunity to do something in the mundane world, to really do non-obtrusive sensing and interventions. I have a bunch of projects that you can see; I'm not going to go through all of them, but they are all in the realm of design science, implementation, and machine learning. I have six to eight people working in my lab right now, and we're very excited; finding collaborations with many of you would be really nice. We want to keep repurposing things. We did something here
with Mary as well: can we actually add some sensors and make it better? These were already Microsoft products. What happened to them? Let's find them again. You can do cool stuff with a capacitive-sensing mouse, you can do cool stuff with pressure sensing; it's very useful. I just told you about
these. The behavior change theme is one I'm very interested in, and one of the things I wanted to close with is that some of these behavior change ideas are very interesting to me. One of them is this non-volitional behavior change, but the other one is: can I do it subliminally? Can I get your breathing rate to change subliminally? So you sit in a chair,
you feel nothing, and your breathing rate is modulated. We're doing that with what's called a low-frequency transducer, also known as a ButtKicker. We're using this ButtKicker to generate very low frequencies that basically use your thorax as a resonance box, to try to generate changes in your pattern of breathing, and we're putting them in both the car and the office. We're trying to use this concept of entrainment, which is very powerful: entrainment means an external stimulus induces an interoceptive change in your body. You can do that with light, with haptics, with tapping, with different rhythmic patterns. We're also using other kinds of synchrony, trying to be biomimetic, so the chair feels like a living animal, it breathes like a living animal. So it's this mammalian connection.
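One way to picture such an entrainment stimulus is a low-frequency carrier whose amplitude swells and fades at the target breathing rate, so the chair seems to breathe. The numbers below (carrier frequency, sampling rate, duration) are illustrative assumptions, not the lab's actual parameters.

```python
import numpy as np

def breathing_entrainment_signal(breaths_per_min=11, carrier_hz=40, seconds=10, fs=2000):
    """Amplitude-modulate a low-frequency carrier with a slow sinusoidal
    'breathing' envelope; the resulting waveform would feed the transducer."""
    t = np.arange(0, seconds, 1.0 / fs)
    envelope = 0.5 * (1.0 + np.sin(2 * np.pi * (breaths_per_min / 60.0) * t))
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

signal = breathing_entrainment_signal()
print(signal.shape, round(signal.min(), 2), round(signal.max(), 2))
```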
We're investigating how to determine the liminal level of haptics. Psychohaptics is less studied than psychoacoustics, but it's possible to find liminal levels of perception and still generate an effect.
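A common way to estimate a liminal (detection) level, borrowed from psychoacoustics, is an adaptive staircase. The sketch below is a generic 1-up/1-down staircase with a simulated participant; it is not the lab's procedure, and all values are assumptions.

```python
import random

def staircase_threshold(detects, start=1.0, step=0.1, reversals_needed=6):
    """Minimal 1-up/1-down staircase: lower the vibration amplitude after a
    'felt it' response, raise it after a 'felt nothing' response, and estimate
    the liminal level as the mean amplitude at the reversal points.
    `detects(amplitude) -> bool` stands in for the participant's report."""
    amp, going_down, reversals = start, True, []
    while len(reversals) < reversals_needed:
        felt = detects(amp)
        if felt != going_down:              # direction change -> record a reversal
            reversals.append(amp)
        going_down = felt                   # felt it -> keep stepping down
        amp = max(0.0, amp - step) if felt else amp + step
    return sum(reversals) / len(reversals)

# Simulated participant with a true threshold around 0.35 (hypothetical units).
estimate = staircase_threshold(lambda a: a > 0.35 + random.gauss(0, 0.02))
print(round(estimate, 2))
```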
I'm actually pretty excited to show preliminary data. These are the conditions we were testing: a baseline, a control, a supraliminal level, and subliminal levels, a less subliminal and a more subliminal one, let's call them. The subliminal condition with the lowest level, 0.2, below your liminal level so that you feel nothing, had the highest effect on breathing rate. That has held across four out of five participants so far. So we're very excited, and we're now designing a new experiment. If this works, it means I can create chairs that will basically modulate your breathing rate without you having to do anything; you just sit on the thing. And this one I already went through. So thank you so much
for your attention. I wanted to remind you that I focused on these nudges, but there's this whole other world of micro-interventions. I think at the end of the day we should all work in an ecosystem. It is an ecosystem of interventions, and we should try to create recommendation systems to manage it, to get this high engagement and high efficacy, the sweet spot we're looking for, across multiple scenarios and across multiple types of people, people who like exploring. As you can see, I was going to show you many of my projects that are up here, but some of them are here. Don't shy away from complexity. This is why I keep telling my people in Mexico: you've been operating on this side for 30 years, some people operate on that one, "Well, I operate in the car zone." I just need to be fed from both sides, the science and the other side, to try to create what I call this precision health engineering, combining design, engineering, and medicine. That's my talk, thank you so much. I appreciate it.>>Any questions?>>I would love it if you
can back up one slide, because I wanted to take a
photo, there’s so much on there.>>Yes, please.>>So for driving, I tend to get more
stressed as I’m driving, and the setup that you have is a stressor at the
beginning, how do you handle that?>>So we just completed collecting data on that. Okay, so the question of what the natural expression of stress in a car looks like, without any added stressor, is something that needs to be documented. Healey and Picard did something on this in the past, but they also relied on simulation. We are trying to collect very naturalistic data, so we just collected 16 people driving for about 45 minutes each. Stanford has some city streets, but then there are some mountains, and then there's the highway, the 280 in the back. So we got them to drive through three different scenarios, and we gathered the data. Yeah, some people get stressed as the drive continues. Some people actually get calm as the drive continues. Some people find driving soothing. So that's a very important
thing that needs to be observed: the baseline is not stable, right? So we need to control for that, and we have not controlled for it in these experiments. Now that we're generating the data to establish your baseline, we will be able to control for that variation and see: maybe I don't need to do anything and can just let you drive, or maybe I have to be very careful about how I manipulate your stress.
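One simple way to handle a drifting baseline, once the naturalistic data exists, is to compare each moment against a per-driver rolling estimate rather than a single pre-drive number. This is only a sketch of that idea, with an assumed window size and sampling rate.

```python
import pandas as pd

def relative_to_rolling_baseline(breathing_bpm, window_s=300, fs_hz=1.0):
    """Express breathing rate as a deviation from a per-driver rolling median,
    so a slowly drifting baseline isn't mistaken for a stress response."""
    series = pd.Series(breathing_bpm)
    window = int(window_s * fs_hz)                       # samples per rolling window
    baseline = series.rolling(window, min_periods=1).median()
    return series - baseline

# Hypothetical 1 Hz breathing-rate trace for one driver.
trace = [14, 15, 14, 16, 18, 19, 18, 15, 14, 14]
print(relative_to_rolling_baseline(trace, window_s=5).round(1).tolist())
```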
If you're already stressed and on top of that I put on heavy metal and get you more stressed, you might actually reach a point where you're completely overwhelmed. That matters not only from the perspective of designing interventions but also from a methodological perspective, because if we want to move to people driving real cars, testing these things in a real car with a real person is very complicated. We provide insurance, we provide all those things, but lives are super important. So right now we're establishing that baseline with empirical information. Yeah, thanks for the point. Yeah.>>Just a follow-up: once you
have the baseline, what do you then do with it [inaudible]? What do you imagine once you know what the baseline is?>>Yeah.>>Do you design the intervention from there and hope that works?>>That's a very
complicated question. I mean, see, that's the thing. I can measure things about your body like movement and muscle tension, and there are all these other wearables; other people are experts in doing those things. We can measure a bunch of very concrete things about your body, and then you have the subjective component, right? Whether it's precise or not, it's powerful and important because it drives behavior: your mindsets, your beliefs, and your core preferences drive behavior. So if I measure something that's going on, say your HRV is going up and I need you to move, but you don't feel it subjectively, will the person move, or will they get annoyed with the thing, right? I think the study of trying to understand these micro-interventions boils down to understanding the complexity of those two sides and trying to bring it all together in a more complex model. But I think there's also a need to understand that there are these broader topics, say,
mindsets, for example. Mindsets are very powerful things, and you have to understand them. You could operate at the level of a mindset and basically do nothing else; once you change the mindset in the right way, you can actually get people moving. So, for example, I might convince you that the desk is really good for your health, and there's data showing that, if you move it every 30 minutes, not just once, you have to move it every 30 minutes. The annoyance at the beginning then basically becomes a way of building resilience and building the mindset of an active person, and you eventually adopt the desk as a normal thing that is helping you, versus if I don't use the mindset component at all. So I'm actually discussing this with Alia Crum. She's a researcher in psychology who studies mindsets, and we're trying to work on this desk problem. I said, "Alia, what's the issue?" People don't really mind that much behaviorally, but when you ask them, they really get upset about this desk, because it takes control away from them and all these other things, a simple desk that goes up and down, and people just... Yeah, so I think we should operate at the lower level of behavior plus the subjective, let's not leave that out; I don't think the objective alone is sufficient. We should also incorporate higher levels of understanding, say mindsets and other things, to try to find the best solution. So we have a long career in front of us to solve these problems. Yes.>>Okay, so there was a bot
[inaudible] last week, which was about bots for software engineering, and this idea of bots being liaisons to help with code review came up quite often, as in a bot would be able to connect you with the right person to help you do the review [inaudible] software developers. I was curious about your chat bots, your micro chat bot interventions: were people or participants interested... they were interested in talking to the bot for some task, but maybe they'd be interested in the bot connecting them [inaudible] being able to
do that [inaudible] facet.>>I think that combination of chat bots and humans is really the most powerful one, and we're trying to understand chat bots a little bit further, but we haven't done hybrids yet; our hope is that we can build such a hybrid. Now we have an opportunity: we are going to be working in rural India with the [inaudible], which is a group of health workers who provide support to other people. They already have phones, and they have other things, and what we're hoping to do is to basically create, let's say, a triaging layer of chat bots. Triaging means the simple stuff, a quick check-in today, and then as soon as you have some more complex need, it connects you to another service or maybe to a more expert person, like the liaison you mentioned. So we create such an infrastructure, if you will, and we try to understand the best way to traverse the hierarchy of support, where we believe chat bots will stay at the lower level, triaging at the entry point.
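To make the idea of a triaging layer concrete, here is a toy routing policy; the severity labels and routing rules are assumptions for illustration, since in practice the hard part is scoring the request, not routing it.

```python
from dataclasses import dataclass

@dataclass
class Request:
    text: str
    severity: int   # 0 = routine check-in, 1 = needs guidance, 2 = complex need

def triage(req: Request) -> str:
    """Toy triaging layer: chat bots handle routine and guided tasks,
    anything more complex is escalated up the hierarchy of support."""
    if req.severity == 0:
        return "chat bot: quick daily check-in / reminder"
    if req.severity == 1:
        return "chat bot: guided exercise, flagged for the health worker's weekly review"
    return "escalate: connect to a health worker or therapist"

print(triage(Request("How did I do this week?", 0)))
print(triage(Request("I have felt very anxious for several days", 2)))
```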
Maybe there will be other chat bots in the future, with more NLP, that might actually be super experts too, and maybe we can hand people off to a prevention-focused chat bot. So it's not just between chat bot and human, but also chat bot to chat bot, or human to chat bot perhaps, even a therapist. As a matter of fact, you remind me of the last interview we did. The person was a patient, actually, and she said, "I don't want to chat with my therapist all the time. Once a month is more than enough, but I would like to have the chat bots help remind me of some things, and I would like my therapist to tell me, talk to this chat bot." So that's what I meant by ecosystem: within that ecosystem, I think chat bots are not a replacement for humans, nor are they just an automation tool; I think they are their own species. I'm borrowing "species" as a broad term from biology, but I think they are their own linguistic species. So I think we could find very interesting setups in the future: sometimes I need a chat bot, sometimes a human, and vice versa. Yeah, but that's a great observation.>>All right. Let's thank Pablo.>>Thank you.
