Nicole Wong
What would you do if you didn't have anything to do?

Nicole's posts

Teaching our kids to be leaders, contributors and global citizens can start with a lemonade stand or a bake sale.

Hauntingly beautiful. Jason deCaires Taylor is an "eco-sculptor" who creates underwater living sculptures. His permanent installations are designed to act as artificial reefs, attracting corals, increasing marine biomass and aggregating fish species.

Check out the video of his work "Silent Evolution" in Cancun, including (about halfway through) the process of the installation.

I just love Beili Liu's Red Thread Legend series:

The ancient Chinese legend of the red thread holds that when children are born, invisible red threads connect them to those with whom they are fated to be. Over the years of their lives they draw closer and eventually find each other, overcoming the distance between them and the cultural and social divides.

The installation makes use of thousands of hand-spiraled coils of red thread suspended from the ceiling of the gallery. Each disk is connected to another, as a “couple,” and each pair is made from a single thread. Every coil is pierced in the center by a sewing needle, which suspends the disks a few inches above the ground. Subtle air currents set the red disks swaying and turning slowly as the loose strands of thread on the floor drift and become entangled.

Instead of preparing for class, I'm watching Belle & Sebastian in paper form. Sorry if the lecture is a little disjointed tomorrow. Maybe it will have a nice backbeat.

After reading about the Yahoo! v. Facebook patent suits, I keep thinking about the millions of dollars that will go to lawyers instead of innovation. Something's broken.

Does the global Internet demand stronger international governance? Come and hear Director-General of UNESCO Irina Bokova speak on information access and freedom in the Digital Age.

Vital stats:
Tuesday, March 20, 2012, 4:30 pm - 6:00 pm
UC Berkeley School of Information, 210 South Hall

Be there or be square. Oh, wait....

Wow. Just wow.
The Vatican has embraced technology and put together this amazing virtual tour of the Sistine Chapel. You can peer round and zoom in to every corner without having to battle tourists or leave the comfort of your home! There's even choir music to set the tone!

SIMS 290-9, continued (the February edition)

Over the last few weeks of class, we have read some pretty esoteric material, including the International Covenant on Civil and Political Rights (again with the pithy titles), which commits 167 countries to protect certain fundamental human rights. We also read the Siracusa and Johannesburg Principles, which prescribe limits on such rights in specified circumstances. For example, one’s right to privacy might be curtailed when balanced against matters of national security or for the protection of public health, safety or morals (yes, the problematically expansive exception for “public morals”).

Unless you work for the UN Human Rights Committee or the International Court of Justice, these are not exactly documents kept at your fingertips. So, a fair question -- which at least one student has asked already -- is “why does this matter to anyone in the real world?”

Indeed, it isn’t even clear what these documents mean to the governments who claim to support them. While the parties to the ICCPR make high-level commitments to protect human rights, many of these states filed reservations or other declarations that limit their effect. For example, the US signed the ICCPR in 1977, but only ratified it in 1992 with limitations on its application to capital punishment, cruel and inhuman treatment, detention and other provisions that, on the whole, mean the commitments have little domestic effect.

Moreover, even if these parties were bound wholly to the ICCPR, there is at best weak enforcement by the international community. Most countries have not ratified the protocol that permits individuals to bring a complaint against a state. More than half of the parties (including the US) have failed to submit reports on the status of human rights in their countries, as required by the treaty. [Here’s a great link to track the status of the ICCPR:]

Perhaps the most damning fact about the ICCPR is its list of signatories. Among those countries that have pledged to respect fundamental human rights -- including the rights to freedom of expression and privacy -- are China (limiting its obligation to Hong Kong and Macao only), Colombia, Iran, Russia, Syria, Turkey and Vietnam. Really?

So, again, what exactly is the point?

For better or worse, the Universal Declaration of Human Rights and the ICCPR may be as close to an international consensus on fundamental rights as we ever attain. And in the uncertain terrain of the global Internet, we desperately need a common framework to continue forward.

As we saw in Tunisia and Egypt, in the Wikileaks scandal and the fight over RIM’s encrypted communications, a moment of crisis is the wrong time to try to make principled decisions about what people can say or what they can keep private. In those moments, every controversial act looks like it falls within an exception to the fundamental rights to free expression and privacy. In those moments, government overreach must be met by a public who are well-versed in their rights and prepared to hold those in power accountable.

However we judge the effectiveness of the ICCPR among nation states, the treaty and its related commentaries provide a common vocabulary and value framework for discussing human rights in information technologies. The rights at the heart of the networked world -- free expression and privacy -- are fundamental and should only be compromised in narrow circumstances. As technologists, policy makers and consumers, the ICCPR gives us a map for evaluating everything from hardware to cloud-based applications and a foundation for designing technologies for better outcomes.

For example, we know that Cisco’s mirror routers and Blue Coat’s filtering appliances are used by authoritarian governments to censor online content in places like China, Syria and Burma. These technologies may well have benign purposes, but the repressive uses are undisputed. If you are an engineer designing a technology that filters or surveils -- a technology that limits the availability of content or privacy by its very function -- then a human rights inquiry would suggest a design or a distribution plan that might minimize bad uses.

A similar -- but even more complex -- analysis can be applied to application and platform providers like YouTube, Facebook and Twitter. There are values embedded in these free, open platforms -- free expression, access to information and association. From a human rights perspective, then, we can make design, policy and business decisions about why and how to limit them.

Neither the questions nor the decisions here are simple. What is Twitter’s obligation when a tweet violates the law in one country but a citizen’s free speech rights in another? On what basis does Facebook provide user data to government authorities around the world? Putting aside legal constraints for the moment, what are the ethical considerations for Google (or Target or Equifax for that matter) handling Big Data?

It would be naive and perhaps even misguided to suggest that large-scale consumer technologies should be built in service of human rights. But it is fair to ask companies to consider human rights in the products they launch to the world. It is good corporate responsibility and good business.

And, besides, who else is there to ask?

This semester, Deirdre Mulligan and I kicked off IS290-9 at UC Berkeley’s School of Information, also known as “Internet Policy Challenges in a Global Environment.” (Okay, not exactly a pithy title.)

Since we are speaking OF the Internet, I thought I’d share some of my thoughts ON the Internet. Following class, I’ll post some additional thoughts on G+ -- things that occur to me in class or ideas that I don’t get around to mentioning. They may not be particularly eloquent, fully-baked or well-ordered ideas. It will be more like the outtakes of a movie, those scraps left on the cutting room floor.

One of the first week’s readings was the foundational article “Shaping the Web: Why the Politics of Search Engines Matters” (The Information Society, 2000) by Lucas Introna and +Helen Nissenbaum.

The article is ancient in Internet time -- already 12 years have passed! But their critical insight -- that technology design is both ethical and political -- remains as relevant as ever.

In the article, Introna and Nissenbaum show how “technological systems may embed or embody values” by examining the search engines of 2000 -- Yahoo!, Aliweb, AltaVista, Lycos and HotBot -- as well as an embryonic idea for algorithmic search by then-graduate student +Larry Page. The authors warn of the systematic exclusion of web sites due to the limits of crawling and indexing the web, of the bias against small, underfunded and low-trafficked sites, and of ranking algorithms based on popularity more than diversity. These systems, the authors feared, would result in a smaller universe of discoverable information and thus a narrowing of human experience.
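For readers who haven't seen the mechanics, the "algorithmic search" idea Larry Page was then developing ranks a page by the rank of the pages linking to it, iterated to a fixed point. Here is a minimal toy sketch of that link-based ranking idea -- my own illustration, not code from the article or from Google; the three-page "web" and all names are invented for the example:

```python
# Toy sketch of link-based (PageRank-style) ranking: a page's score
# depends on the scores of the pages that link to it, iterated until
# the scores stabilize. Not Google's actual algorithm, just the idea.

def pagerank(links, damping=0.85, iters=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal scores
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:                   # p passes rank to pages it links to
                    new[q] += share
            else:                                # dangling page: spread rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Invented three-page web: "c" receives the most links.
web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
scores = pagerank(web)
```

With this tiny graph, the most-linked page ("c") ends up with the highest score, which is exactly the popularity bias the authors worried about: link-rich pages rise, obscure ones sink.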

On a first reading, I had a hard time getting through the article because I was fighting both the authors’ now-dated documentation of how search technology worked and their predictions about how the Internet would evolve. But that was just the gift of hindsight. Studying the World Wide Web circa 2000, Introna and Nissenbaum were looking at the manually “surfed” Yahoo! index and at ranking algorithms heavily reliant on text matching. They worried about the limits of capacity and technology to include all the voices on the web, and they worried that search engines had no incentive to push those boundaries.
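To make "heavily reliant on text matching" concrete, here is a toy scorer in the spirit of those circa-2000 engines -- my own invented example, not code from the article: pages are ranked purely by how often the query terms appear, with no notion of links or authority.

```python
# Toy circa-2000 text-matching ranker: score a page by raw query-term
# frequency. Invented example pages; no link analysis at all.

def keyword_score(query, page_text):
    terms = query.lower().split()
    words = page_text.lower().split()
    return sum(words.count(t) for t in terms)

pages = {
    "small-site": "lemonade stand lemonade recipes lemonade history",
    "big-site": "news sports weather lemonade",
}
ranked = sorted(pages, key=lambda p: keyword_score("lemonade", pages[p]),
                reverse=True)
```

Whichever page repeats the query term most wins, which is why such rankers were easy to game with keyword stuffing -- and part of why link-based ranking looked so promising.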

I think Larry, +Amit Singhal, +Marissa Mayer, +Matt Cutts and others would be pleased to report that they worked hard to avoid many of the traps of search myopia identified by the authors. By 2008, Google estimated that there were one trillion unique URLs on the web and it surged ahead of its competitors by making the comprehensiveness of its index a priority. In the last decade, we have seen hundreds of large and small improvements to search technology that surface the most obscure information to the right person at the right time. Today, we can search across video, books, and in multiple languages. Far from lacking incentive, search engines have been chasing the economic value of comprehensiveness and the long tail of content which has come to fuel so much of the Internet.

That said, Introna and Nissenbaum were right to worry about users’ narrowing field of vision when they search for information. For years now, users have seldom looked past the first few pages of search results, much less compared results on different search engines (and there are a diminishing number of search engines to choose from these days). And as search becomes “social,” keyword-driven search seems almost quaint. Increasingly, people are finding information from their social networks via “Likes,” “reTweets,” “+’s” and sharing.

So, this is where the debate over the ethics and politics of information technology must shift. To revive Introna and Nissenbaum’s question: how does social search affect the nature of web users’ experiences? Will they find an open, comprehensive and diverse world of information? Or will it be an echo chamber of their own opinions and preferences, narrowed by a social network of like-minded friends? And, as a societal matter, do we care?

Consider a few concrete hypotheticals: what if Facebook decided to do a deal with MSNBC and only MSNBC news could be easily redistributed within the Facebook network? What if Google search favored showing Google+ and other Google properties over other sources? Or what if Twitter started autopopulating your followers based on the networks of other people with similar interests? These scenarios reflect choices -- business and technical choices to be sure, but as Introna and Nissenbaum foreshadowed in their article, ethical and political choices as well.

Assuming we care about how social networks will affect our access to information, now is the time to stake out design principles for these new platforms. Are these services sufficiently transparent in how they suggest (or limit) information? Should there be a mechanism for offering contrary opinions (think of the FCC’s Fairness Doctrine)? What role, if any, should government have in this marketplace?

Unlike the openness built into the architecture of the Internet, social networks start with a design that is closed. As such, it takes thought and effort to replicate the type of openness that supports the breadth and depth of ideas and opinions on the web. Of course, if there’s anything I’ve learned in the last 15 years of representing Internet companies, it’s that engineers can do anything. :)


For reporters and citizen journalists covering "Occupy" demonstrations in NY, SF or wherever, there are two things you need to have with you at all times: First, a gas mask. Second, the Citizen's Guide to Reporting on Occupy Protests, available free on the First Amendment Coalition's smartphone app (also free).

The Guide was produced by the Citizen Media Law Project, an affiliate of Harvard Law School's Berkman Center. It's being distributed by the First Amendment Coalition.