“Everything is in place for location-based social networking to be the next big thing. Tech companies are building the platforms, venture capitalists are providing the cash and marketers are eager to develop advertising.
All that is missing are the people.”
Once you scratch the surface, however, you soon realize what a terrible and impractical idea an app-enabled smartphone remote really is.
(And the author forgets to mention that TVs are often watched in darker environments, where tactile feedback on the location of the volume and programme buttons is genuinely helpful: one can change the volume or the programme without looking at a normal remote, but you can’t do that with a touchscreen smartphone.)
“Some in the technology industry believe that a better alternative would be to simply replace the remote with smartphone apps like the one Mr. Lavoie uses. If you create a specialized smartphone app to control a TV or set-top box, you can pack the phone’s touch screen with virtual buttons in any configuration you like. [...]
[Other] companies are not sold on the idea of the smartphone as the remote of the future. They are selling a range of remotes armed with full keyboards, touch screens and motion sensors.”
What exactly does context-aware computing mean? According to Genevieve Bell, director of Intel’s Interaction and Experience Research Group, context-aware computing refers to “technologies that are able to determine how you feel, who you’re friends with and what your preferences are to better deliver personalized information.”
At a recent event in New York City, Intel showed off four research projects that represent possible future everyday uses of context-aware computing.
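Bell’s definition can be made concrete with a purely illustrative sketch: a toy scorer that ranks suggestions by combining a user’s stated preferences with an ambient context signal. Every name, category, and weight below is invented for illustration and reflects nothing about Intel’s actual research systems.

```python
# Toy illustration of context-aware ranking: a suggestion scores well when
# it matches both the user's preferences and the current context.
# All categories, settings, and weights here are hypothetical.

def score(suggestion, preferences, context):
    """Combine preference affinity with contextual relevance."""
    pref = preferences.get(suggestion["category"], 0.0)  # how much the user likes this kind of thing
    ctx = 1.0 if suggestion["setting"] == context["setting"] else 0.3  # does it fit the moment?
    return pref * ctx

def suggest(suggestions, preferences, context):
    """Pick the suggestion that best fits both the person and the situation."""
    return max(suggestions, key=lambda s: score(s, preferences, context))

preferences = {"music": 0.9, "news": 0.4}
context = {"setting": "commute"}
options = [
    {"name": "morning playlist", "category": "music", "setting": "commute"},
    {"name": "long-read article", "category": "news", "setting": "home"},
]
print(suggest(options, preferences, context)["name"])  # → morning playlist
```

The point of the sketch is only that the same catalogue of options yields different suggestions as the context changes, which is the core promise (and the core difficulty) of context-aware computing.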
A cultural anthropologist with degrees from Harvard and Stanford, Mimi Ito co-directed the Digital Youth Project, which was funded by the MacArthur Foundation and focused on new m-Learning scenarios. The project has become an important point of reference for those studying the relationship between teens and new media.
The three-year Digital Youth Project researched kids’ and teens’ informal learning through digital media, with a particular focus on the day-to-day use and the impact of these new technologies on learning, play and social interaction.
The results of the project are encapsulated in the report, Living and Learning with New Media: Summary of Findings from the Digital Youth Project, and the book Hanging Out, Messing Around, and Geeking Out: Kids Living and Learning with New Media.
Mimi explored a vast range of social activities that are “augmented” by digital technology: online gaming, virtual communities, the production and consumption of children’s software, and the relationship between children and new media. She also specialises in amateur content production and peer-to-peer learning.
She teaches at the Department of Informatics of the University of California, Irvine, and at Keio University in Kanagawa, Japan. She has also worked for the Institute for Research on Learning, Xerox PARC, Tokyo University, the National Institute for Educational Research in Japan, and for Apple Computer.
Her new book on otaku culture (the Japanese term for people who have an obsessive interest in video games and manga) will be published shortly.
Mimi Ito joined the Wikimedia Foundation Advisory Board in June of this year.
Watch video (Mimi starts speaking at 19:30)
“At the center of the broader societal debate is Boyd, whose views on key issues like online privacy are followed closely by tech companies and policy makers. An opponent of “regulation for its own sake,” as she puts it, Boyd, 32, has become a go-to source for companies (from Google on down), government agencies, and academics seeking insight into youthful behavior in a 24/7 digital universe.
She prides herself on diving deeply into what young people think and feel about their use of social media. With her tongue stud, bracelets, and neobohemian style of dressing, she fits in seamlessly with her target demographic, even while joking that they all “think I’m an old lady.””
Apart from the fact that this video provides great inspiration for interaction designers and interface designers of all sorts, and not just those working in journalism, it also inspires a wider reflection.
With people rapidly moving to a world inundated with data capturing devices and the resulting data streams, our challenge as UX designers is to create tools that make sense of these data, and transform this data flood into useful and actionable informational experiences that help us better conduct our lives.
Smartphone applications seem to me an intermediate step. Yes, indeed, one can find apps for almost any need, and they are sometimes quite useful. But we cannot conduct our lives with hundreds of apps: one for parking, one for driving, one for shopping, one for dining, etcetera.
What could be the future of actionable data visualisations in a multi-sensorial world?
“Lady Greenfield of Oxford University has stepped up her campaign for an inquiry into “mind change” caused by computers and the internet. [...]
Lady Greenfield said the possible benefits of modern technology included a higher IQ, better memory and quicker processing of information. But she is more worried about the potential negative side. For example, social networking sites might reduce the empathy that young people felt towards others; using search engines to find facts might hinder the ability to learn; and computer games in which it was possible to start from the beginning, no matter how many mistakes were made, might make us more reckless in our day-to-day lives, she said.”
“According to NPD, a whopping 75 percent of all U.S. consumers did not connect to or download multimedia content, including games, music, video, or e-books, over the past three months. The majority of consumers who did search for and download such content–15 percent–did so mostly on their PC or Mac as opposed to other types of connected devices, such as video game consoles, mobile devices, or Blu-ray players.”
“Many children want to read books on digital devices and would read for fun more frequently if they could obtain e-books. But even if they had that access, two-thirds of them would not want to give up their traditional print books.
These are a few of the findings in a study being released on Wednesday by Scholastic, the American publisher of the Harry Potter books and the “Hunger Games” trilogy.
The report set out to explore the attitudes and behaviors of parents and children toward reading books for fun in a digital age. Scholastic surveyed more than 2,000 children ages 6 to 17, and their parents, in the spring.”
“Concepts like sharing and bartering — whether it’s fabric at Etsy Labs in Dumbo or powerboats at SailTime on the Chelsea Piers — are being revived and updated for the Twitter age.
“The groundswell of social technology today is creating unprecedented opportunities to share and collaborate,” said Rachel Botsman, an author of the new book “What’s Mine Is Yours: The Rise of Collaborative Consumption.” “Farmers’ markets and Facebook have a lot in common. All around us we’re seeing a renewed belief in the importance of community, in both the physical and virtual worlds.”
Despite the lingering hippie connotations, collectives, which might be described as self-managed groups of people with similar interests working toward a common goal, are a thoroughly modern phenomenon.”
“My talk today is about how I came into my research at Nokia wanting to answer the question: how can ethnographers contribute to the product design process of a mobile device? Ethnographically grounded research for technology use is a method that aims to reveal users’ values, beliefs, and ideas. Nokia was one of the first mobile companies to concertedly hire ethnographers as part of its design process. In the mid-to-late nineties, Nokia changed the mobile industry forever by creating affordable, user-friendly phones. More than a decade later, the hardware mobile phone market is nearing saturation. With Nokia transitioning from a company that produces hardware to one that produces software, how can ethnographically driven research provide strategic insights for this shift?”
Poking around on Tricia’s site, I discovered some more inspiring and excellently written treasures to savour:
The Great Internet Freedom Bluff of Digital Imperialism: thoughts on cyber diplomacy, cargo cult digital activism… and Haystack
The Haystack Affair, like the recent Google-China Saga, is just another technology that has been caught in the digital geo-politics of neo-informationalism. Neo-informationalism is the belief that information should function like currency in free-market capitalism—borderless, free from regulation, and mobile. The logic of this rests on an ethical framework that is tied to what Morgan Ames calls “information determinism,” the belief that free and open access to information can create real social change. [...] Neo-informationalist policies, such as the new “internet freedom” foreign policy to ensure free and flowing information, complement neoliberal practices in corporate welfare to keep markets free and open to the US and all of our allies who benefit from our work. But it’s not free for all when it’s just free for some.
Check also these related posts:
- Evgeny Morozov: Were Haystack’s Iranian testers at risk?
Haystack is the Internet’s equivalent of the Bay of Pigs Invasion. It is the epitome of everything that is wrong with Washington’s push to promote Internet Freedom without thinking through the consequences and risks involved; thus, the more we learn about the Haystack Affair while it’s still fresh in everyone’s memory, the better.
- Sami Ben Gharbia: The Internet Freedom fallacy and Arab digital activism
This article focuses on grassroots digital activism in the Arab world and the risks of what seems to be an inevitable collusion with U.S. foreign policy and interests. It sums up the most important elements of the conversation I have been having for the last two years with many actors involved in defending online free speech and the use of technology for social and political change. While the main focus is Arab digital activism, I have made sure to include similar concerns raised by activists and online free speech advocates from other parts of the world, such as China, Thailand, and Iran.
Three useful perspectives on technology, design, and social change (and countering the ICT4D hype)
As someone who researches the social side of technology, I am constantly trying to find new ways to explain to technologists that technology itself does not create social change; rather, it’s how technology is socially embedded in a variety of institutions and cultural contexts. [...] Three resources have been very useful to me lately.
Rattner describes the future of context-aware computing
The real question, Rattner said, is: Is the market ready for all of this context? Intel Fellow Genevieve Bell (who also led the Day Zero events) arrived onstage to explain that all users have “ambivalent and complex” relationships with technology, and that discovering what people truly love is the key to making context-aware computing work. The process involves conceptualizing and designing potential products, validating that in the real world, integrating the changes, and repeating the process until the users are satisfied. This will involve, Bell said, talking more to users, but also helping them understand that context and life are not different contexts—watching a baseball game, seeing a road sign, or using multiple devices in a living room are all examples of context that can help devices learn more about you and what you need. Bell said, “If we get context right—even a little bit right—it propels an entirely new set of experiences.”
Wired.com > Gadget Lab
How context-aware computing will make gadgets smarter
Small always-on handheld devices equipped with low-power sensors could signal a new class of “context-aware” gadgets that are more like personal companions. Such devices would anticipate your moods, be aware of your feelings and make suggestions based on them, says Intel.
Researchers have been working for more than two decades on making computers more in tune with their users. That means computers would sense and react to the environment around them. Done right, such devices would be so in sync with their owners that the former would feel like a natural extension of the latter.
Intel: Future smartphones will be assistants, companions [alternate link]
Rattner said that as devices begin to understand the way their users live their lives, they will turn into personal assistants. Within five years, smartphones will be aware of the information on a user’s laptop, desktop and tablet systems, and they will use that knowledge to help guide them through their daily activities.
Coming soon: mind-reading cell phones
Eventually, Intel might actually produce truly psychic cell phones. Earlier this summer, we learned about Intel’s Human Brain Project–a collaboration with Carnegie Mellon University and the University of Pittsburgh that uses EEG, fMRI, and magnetoencephalography to figure out what a subject is thinking about based entirely on their neural activity pattern. The technology won’t be ready for at least a decade–and that’s just fine with us.
And there is much more…
“The problem, as I see it, is that many small startups, and even some larger social media companies and efforts, lack user-centric and objective definitions of their goals and objectives. Companies are started to extend existing practices or applications, to take advantage of emerging market and social technology trends, and to explore opportunities in the marketplace. Those are either product or business-centric approaches, and they take user participation and interest for granted.
But the participation of users is precisely what will shape a company’s success. Social interaction design should be an essential step in vetting and defining product and service features. It can be relatively quick, and is not a full-time requirement. But insofar as it supplements the skills already covered by engineers, front-end designers, and business sense, it is a role that should not be overlooked.”
“The increasing rate of technological innovation and integration into the daily lives of many online users has spurred both the reconceptualization of the digital divide and the promotion of user-centered research methods that emphasize the significance of variability in technology acquisition. No longer is technology to be considered an external tool, where success depends on basic functionality, but an integrated player affected by the social relations and environment in which it resides. Through Niklas Luhmann’s systems theory in combination with actor-network theory, this study aims to look first at the systems in which nonprofits and web designers separately operate; how their processes are altered with the introduction of design ethnography and WordPress; and finally which human and nonhuman actors may be utilized in the creation of websites for nonprofits who desire technological self-sufficiency.”
A few blogs report on Bell’s contribution, but so far no video is online.
“Aside from asking the right questions, it’s also about learning through engagement and designing a set of experiences. Bell cited as one of her biggest coups of the last couple of years the fact that users are now as important to Intel as silicon. One of her biggest breakthroughs was the realization that she needed a roadmap that reflects what users needed instead of a simple processor update. However, she conceded that unless the intended experience of the silicon is very clear, it’s hard to make the right call throughout the entire process, from conceptualizing and designing to testing in homes and labs.”
“We’re marrying social science with engineering, taking what we know about human beings. We have a centre of excellence for understanding people, and one for engineering. The lab thinks about human IO, not just computer IO, and running the gamut of new forms of input method, being playful and provocative. Having engineers makes this happen. In the next ten years, you will see some very different things from Intel,” said Genevieve Bell.
“Intel thinks the idea of understanding future user experiences is important enough that it has funded an entire arm of its research organization to this, known as “Interactions and Experiences Research.” Split into design and technology elements, and headed by Dr. Bell, the idea is to understand how users worldwide experience their technology, what they love about it, and what frustrates them.”
“Speaking on Day Zero of this year’s Intel Developer Forum in San Francisco, Bell suggested that Intel should begin to “think about experiences as a starting point for designing new technology”. Instead of working around a list of features, she explained that this would require Intel to understand the experiences people have with technology today. With such understanding, the company could focus on creating new technologies to better those existing “beloved” experiences and facilitate new ones.”
“Genevieve Bell, an anthropologist and Intel researcher spoke about how she is trying to get Intel to think simple instead of complex. She and her team travel the world watching how people use technology in public and at home.”
“You’ve likely never heard of him, but he has almost certainly had an impact on your life. A principal researcher with Microsoft Research who commutes from his home in Toronto to Redmond, Washington one week out of every month, he conceives and develops innovations in user interfaces. He played a chief role on the team that invented the multi-touch user interface. That was in 1984. He was also co-recipient of an Oscar for scientific and technical achievement in film in 2003. And he’s currently lending a hand developing an exciting consumer technology that he predicts will begin its march toward ubiquity in just three short years (no spoilers here—you’ll have to read on to discover what it is).
A conversation with Mr. Buxton is filled with fascinating digressions about the history of current technologies and how decades-old innovations can be the foundations of some of the most stimulating modern gadgets. The interview I had with him in July was arranged so that we could discuss Kinect, Microsoft’s new controller-less interface for the Xbox 360, but that ended up being just one part of our lengthy and enlightening discussion. That’s why I’ve decided to transcribe the bulk of the conversation. To do anything less would deprive readers of his captivating tales of technology.”
The first part deals with the back stories of several modern consumer devices, from touch screen phones to smart watches.
The second part focuses on Kinect, the motion-based, controller-less interface that will come to the Xbox 360 this November.
In the final part Buxton reflects on what the next big thing will be.
Why your world, work, and brain are being creatively disrupted
by Nick Bilton
Crown Business, Sept. 2010
The New York Times has published an article that was adapted from the book I Live in the Future & Here’s How It Works by Nick Bilton, the lead writer for The New York Times technology blog Bits. The book, to be published on Tuesday by Crown Business, examines the impact of technology on our lives.
“Now, we are always in the center of the map, and it’s a very powerful place to be.
When people want to know how the media business will deal with the Internet, the best way to begin to understand the sweeping changes is to recognize that the consumer of entertainment and information is now in the center. That center changes everything. It changes your concept of space, time and location. It changes your sense of community. It changes the way you view the information, news and data coming directly to you.
Now you are the starting point. Now the digital world follows you, not the other way around.”
Living with Complexity
by Donald A. Norman
MIT Press, October 2010
If only today’s technology were simpler! It’s the universal lament, but it’s wrong. We don’t want simplicity. Simple tools are not up to the task. The world is complex; our tools need to match that complexity.
Simplicity turns out to be more complex than we thought. In this provocative and informative book, Don Norman writes that the complexity of our technology must mirror the complexity and richness of our lives. It’s not complexity that’s the problem, it’s bad design. Bad design complicates things unnecessarily and confuses us. Good design can tame complexity.
Norman gives us a crash course in the virtues of complexity. But even such simple things as salt and pepper shakers, doors, and light switches become complicated when we have to deal with many of them, each somewhat different. Managing complexity, says Norman, is a partnership. Designers have to produce things that tame complexity. But we too have to do our part: we have to take the time to learn the structure and practice the skills. This is how we mastered reading and writing, driving a car, and playing sports, and this is how we can master our complex tools.
Complexity is good. Simplicity is misleading. The good life is complex, rich, and rewarding—but only if it is understandable, sensible, and meaningful.
Business Week has named Don Norman as one of the world’s most influential designers. He has been both a professor and an executive: he was Vice President of Advanced Technology at Apple; his company, the Nielsen Norman Group, helps companies produce human-centered products and services; he has been on the faculty at Harvard, the University of California, San Diego, Northwestern University, and KAIST, in South Korea. He is the author of many books, including The Design of Everyday Things, The Invisible Computer (MIT Press, 1998), Emotional Design, and The Design of Future Things.
“Each time Facebook’s privacy settings change or a technology makes personal information available to new audiences, people scream foul. Each time, their cries seem to fall on deaf ears.
The reason for this disconnect is that in a computational world, privacy is often implemented through access control. Yet privacy is not simply about controlling access. It’s about understanding a social context, having a sense of how our information is passed around by others, and sharing accordingly. As social media mature, we must rethink how we encode privacy into our systems.”
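boyd’s observation that systems reduce privacy to access control can be made concrete with a minimal sketch. The class and method names below are hypothetical and do not describe any real platform’s API; the point is what the model can and cannot express.

```python
# Minimal access-control model of "privacy": a post carries an explicit
# audience list, and visibility is a pure membership check. Notice what the
# model cannot express: the social context of sharing, or what viewers do
# with the content afterwards -- exactly the gap boyd describes.

class Post:
    def __init__(self, author, text, audience):
        self.author = author
        self.text = text
        self.audience = set(audience)  # the only privacy "knob" the system offers

    def can_view(self, user):
        """Visibility reduces to a set-membership test."""
        return user == self.author or user in self.audience

post = Post("alice", "weekend photos", audience=["bob", "carol"])
print(post.can_view("bob"))      # True  -- on the list
print(post.can_view("mallory"))  # False -- off the list; but nothing in the
                                 # model prevents Bob from re-sharing the post
```

The binary check is easy to implement, which is why platforms build it; the contextual norms boyd describes (who passes information onward, and in what setting) have no representation in the data model at all.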