Kaj Sotala
2,075 followers
Kaj's posts

> It’s curious how many writers tend to expect instant gratification. We’ve barely rolled a sheet of paper into the typewriter than we expect to see our efforts on the best seller list.

> It seems to me that other artists are rather less impatient of tangible success. What painter expects to sell the first canvas he covers? More often than not he plans to paint over it once it’s dried. What singer counts on being booked into Carnegie Hall the first day he hits a high note? Every other artistic career is assumed to have an extended and arduous period of study and apprenticeship, yet all too many writers think they ought to be able to write professionally on their first attempt, and mail off their first stories before the ink is dry.

> There must be reasons for this. I suppose the whole idea of communication is so intrinsic a part of what we do that a piece of writing which goes unread by others is like Bishop Berkeley’s tree falling where no human ear can hear it. If nobody reads it, it’s as if we hadn’t even written it.

> Then too, unpublished writing strikes us as unfinished writing. An artist can hang a canvas on his own wall. A singer can croon in the shower. A manuscript, though, is not complete until it is in print.

> At first glance this desire to receive money and recognition for early work would look like the height of egotistic arrogance. It seems to me, however, that what it best illustrates is the profound insecurity of the new writer. We yearn to be in print because without this recognition we have no way of establishing to our own satisfaction that our work is of any value.

-- Lawrence Block, Writing the Novel from Plot to Print to Pixel

> When I was fifteen or sixteen years old, and secure in the knowledge that I’d been placed on this planet to be a writer, it didn’t even occur to me to wonder what sort of thing I would write. I was at the time furiously busy reading my way through the great novels of the century, the works of Steinbeck and Hemingway and Wolfe and Dos Passos and Fitzgerald and all their friends and relations, and it was ever so clear to me that I would in due course produce a Great Novel of my own.

> I’d go to college first, naturally, where I might get a somewhat clearer notion of just what constituted a Great Novel. Then I’d emerge into the real world where I would Live. (I wasn’t quite certain what all this capital-L Living entailed, but I figured there would be a touch of squalor in there somewhere, along with generous dollops of booze and sex.) All of this Living would ultimately distill itself into the Meaningful Experiences out of which I would eventually produce any number of Worthwhile Books.

> Now there’s nothing necessarily wrong with this approach. Any number of important novels are produced in this approximate fashion, and the method has the added advantage that, should you wind up writing nothing at all, you’ll at least have treated yourself to plenty of booze and sex en route.

-- Lawrence Block, Writing the Novel from Plot to Print to Pixel

4X games (e.g. Civilization and Master of Orion; for this discussion I'm also counting Paradox-style grand strategy like Crusader Kings, Europa Universalis, and Stellaris) have a well-known problem where, once you get sufficiently far ahead, you've basically already won and the game stops being very interesting. There have been various attempts to fix this, but the problem remains. (Extra Credits had a good discussion of why this is difficult to design around: https://www.youtube.com/watch?v=PJKTDz1zYzs )

One idea for fixing it could be to change the victory conditions as the game moves on.

Consider that the Civilization series has traditionally been very focused on warfare and conquest, even though recent versions have introduced a lot more peaceful content. But it still tends to be profitable to take over your neighbors if you can.

Now, a game that's heavily focused on conquest and warfare makes sense if it's set in the period of, say, Ancient Rome, where "will a more powerful ruler conquer my empire?" was a very real threat, and something a ruler of that time would realistically have been concerned about. And this remained true for a long time.

But the closer we get to the modern day, the less true this becomes. Being taken over by an invading army simply isn't something that the rulers of most countries worry about these days; the things they do care about are entirely different. So for a nation-management game set in the modern day, unless it was also set in one of the few conflict-ridden regions that still exist, making warfare a major focus simply wouldn't make much sense. Things like economic competition and maintaining the quality of life of your citizens matter much more.

So you might be able to build a design that emulates this. In the early parts of the game, you would want to focus more on conquest and on growing bigger to avoid being taken over by your neighbors, as in the current games. But as technology developed, conquest would slowly cease to be a viable path, as entirely new game mechanics took its place. This would gradually erode the advantages you got from your early-game success, but not so fast as to make them meaningless. Rather, your early gains would give you a leg up as you raced to adapt, pivoting your old advantages to match the new conditions before they became just a burden. (In the end, if you didn't play your cards right, your big country would definitely still be around, but it would lose in score to those pesky little Nordic-analogue countries with small homogeneous populations that kept topping all the quality-of-life rankings.)
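
To make this concrete, here is a minimal sketch, in Python, of one way such era-dependent scoring could work. Everything here is hypothetical: the Era values, the SCORE_WEIGHTS numbers, and national_score are invented for illustration and aren't taken from any existing game.

```python
# Hypothetical sketch: the contribution of conquest to a nation's score shrinks
# as eras advance, while economy and quality of life grow in importance.
from enum import Enum

class Era(Enum):
    ANCIENT = 0
    MEDIEVAL = 1
    INDUSTRIAL = 2
    MODERN = 3

# Invented weights, for illustration only.
SCORE_WEIGHTS = {
    Era.ANCIENT:    {"conquest": 0.7, "economy": 0.2, "quality_of_life": 0.1},
    Era.MEDIEVAL:   {"conquest": 0.5, "economy": 0.3, "quality_of_life": 0.2},
    Era.INDUSTRIAL: {"conquest": 0.3, "economy": 0.4, "quality_of_life": 0.3},
    Era.MODERN:     {"conquest": 0.1, "economy": 0.4, "quality_of_life": 0.5},
}

def national_score(era: Era, conquest: float, economy: float, quality_of_life: float) -> float:
    """Weighted score: early conquest still helps, but stops being decisive."""
    w = SCORE_WEIGHTS[era]
    return (w["conquest"] * conquest
            + w["economy"] * economy
            + w["quality_of_life"] * quality_of_life)

# The same conquest-heavy position is worth much less in a later era:
print(national_score(Era.ANCIENT, conquest=80, economy=20, quality_of_life=10))  # 61.0
print(national_score(Era.MODERN, conquest=80, economy=20, quality_of_life=10))   # 21.0
```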

I recall seeing some board and card games that simulate something like this: early-game, early-history advantages give you a leg up but don't guarantee victory if you don't keep up with the tech race and changing conditions. Innovation comes to mind as one example.

One interesting way of doing this in a computer game might be to take a page from the book Seeing Like A State and its discussion of legibility. Early nations were not very legible to their rulers: a ruler might have had only a very poor idea of how many people lived in the nation, or of how much food different farmers were capable of producing (and thus how much they could be taxed). The book argues that much of history has been a constant drive by rulers to increase the legibility of their realms: conducting censuses, standardizing measures, forcing people to adopt family names, and so on.

This ties nicely to what Extra Credits considers the two main problems in making strategy games interesting in the long term: accumulating bonuses, and the constant reduction in uncertainty. In a game like Civilization, you start with a lot of uncertainty, as the whole map is unexplored and you don't know anything about your local terrain or where your opponents are. As the game advances, there's less and less that you don't know and which you would have to address in your planning, and the game essentially becomes a puzzle where you can just figure out the best strategy and then execute it.

Now consider a game that implemented legibility as a game concept. What this might mean is that, at first, the game would only model the things that you as a ruler roughly knew about. For example, you wouldn't know the exact crop yields produced across your empire, so each city would produce the same amount, simulating the fact that you basically only knew the nation-wide average and couldn't do much to affect it.

When you developed technologies that increased the legibility of your empire, the game would randomly assign each city a number of variables (e.g. properties of the local terrain) that collectively determined how much it actually produced. These would then be revealed to you, letting you manage the cities more closely and thus making your realm more powerful. It would also force you to rethink any existing plans for the locations of future cities, since more detailed information about local terrain and its impact on crop yields would now be available. What had looked like an excellent city site might turn out to be a horrible one, and vice versa.
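
As a minimal sketch of how that legibility mechanic could be modeled (purely hypothetical: City, Empire, and visible_yield are invented names, and the numbers are arbitrary):

```python
# Hypothetical sketch of legibility as a game mechanic: per-city variation
# exists from the start, but the player only sees (and can act on) it after
# researching a legibility-increasing technology.
import random
from dataclasses import dataclass, field

@dataclass
class City:
    name: str
    # Hidden local conditions (soil, climate, ...) rolled when the city is founded.
    hidden_yield_modifier: float = field(default_factory=lambda: random.uniform(0.5, 1.5))

@dataclass
class Empire:
    cities: list
    base_yield: float = 10.0
    legible: bool = False  # flips to True after e.g. a "Census" technology

    def visible_yield(self, city: City) -> float:
        """The crop yield the player sees and manages for a given city."""
        if not self.legible:
            # Illegible realm: the ruler only knows a flat nation-wide average.
            return self.base_yield
        # Legible realm: true per-city variation is revealed, rewarding
        # (and forcing) closer management and replanning.
        return self.base_yield * city.hidden_yield_modifier

empire = Empire(cities=[City("Alpha"), City("Beta"), City("Gamma")])
print([empire.visible_yield(c) for c in empire.cities])  # all identical
empire.legible = True  # research a legibility technology
print([empire.visible_yield(c) for c in empire.cities])  # now differs per city
```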

This would maintain a kind of strategic uncertainty throughout the game: each step that made your realm more legible would throw a bunch of totally new information at you, which you would have to adapt to and rework your plans around in order to stay competitive. And as there was more and more peaceful economic activity that you as a ruler could get involved in, the military side would gradually decline in relative importance.

+Kalle Viiri mentioned the Colonization games as a nice, slightly different approach to this: given that the goal is to be the first colony to become independent, being the biggest isn't necessarily what wins you the game. Being big and powerful does benefit you in colony-vs.-colony wars, but a smaller colony is easier to defend when you declare independence and end up in a war against your home country. Similarly, the mechanics of how independence support works mean that a big colony has a harder time getting to the point where it can actually declare independence. That's another take on a similar idea: growing larger gets you advantages in some areas, but it doesn't win you the game by itself, and in some ways it's even actively harmful to winning.
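
Purely as a toy illustration of that last dynamic, and not the actual Colonization rules: if declaring independence requires a fixed fraction of your population to support it, while support is generated at a roughly fixed rate, then a bigger, more populous colony simply takes longer to get there.

```python
# Toy model (not the real game's formulas): independence support accumulates at
# a fixed rate, but the threshold scales with total population, so larger
# colonies take longer to become ready to declare independence.
def turns_until_independence(population: int,
                             support_generated_per_turn: float,
                             required_support_fraction: float = 0.5) -> float:
    """Turns until the required share of the population backs independence."""
    required_support = required_support_fraction * population
    return required_support / support_generated_per_turn

# With the same political output, a small colony reaches the threshold far sooner:
print(turns_until_independence(population=40, support_generated_per_turn=2.0))   # 10.0
print(turns_until_independence(population=200, support_generated_per_turn=2.0))  # 50.0
```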

> Over the past two years, I’ve applied for some of the most prestigious academic positions in the world: for numerous scholarships including the Rhodes, Fulbright, and Marshall, as well as for Master’s and PhD positions at Harvard, MIT, Cambridge, and other top universities. [...] A large part of the application process has been working with applications reviewers, primarily from the university where I studied for my undergraduate degree. In total, I’ve worked with five essay reviewers, a dozen mock interview panellists, and the university’s scholarship advisor. [...]

> This essay is about my experience with the application process—specifically how I was repeatedly encouraged to alter my applications to conform with far-Left political ideology. These alterations would ostensibly bolster my chances of being accepted to and receiving funds for graduate programs. [...]

> When I would show [early] drafts to my writing fellows or scholarship advisors, the first question they would ask, almost unanimously, was but why do you care about extreme poverty? [...] What on earth did they mean? A number of them followed up by asking if I had witnessed anyone living in extreme poverty. No, I hadn’t. Had I or anyone I know ever contracted malaria or a neglected tropical disease? No. Did I feel I had a responsibility to the developing world as a beneficiary of colonialism? Not particularly. How did my privilege and my identity as a White Westerner contribute to my decision to focus on extreme poverty? It didn’t. [...]

> It quickly dawned on me that my advisors were, for the most part, largely incapable of understanding how a wealthy White American could possibly care about impoverished Black Africans, apart perhaps from White guilt or some deeper personal connection to poverty.

> In the world of the far-Left, the only sensible explanation for why one person would care about the suffering of another is that they personally identified with them on the basis of culture, ethnicity, race, or gender. Moral universalism has become inconceivable for many academics on the Left, who doubt that it’s possible to care about the suffering of another human being independently of your respective identities. Do I think this problem is exclusive to the political Left? No. But I do think it’s been exacerbated to phenomenal levels by identity politics.

Here's a political position I have a lot of sympathy for, even though I don't agree with all of it:

> Left-libertarianism in the relevant sense is a position that is simultaneously leftist and libertarian. It features leftist commitments to:

> * engaging in class analysis and class struggle;
> * opposing corporate privilege;
> * undermining structural poverty;
> * embracing shared responsibility for challenging economic vulnerability;
> * affirming wealth redistribution;
> * supporting grass-roots empowerment;
> * humanizing worklife;
> * protecting civil liberties;
> * opposing the drug war;
> * supporting the rights of sex workers;
> * challenging police violence;
> * promoting environmental well-being and animal welfare;
> * fostering children’s liberation;
> * rejecting racism, sexism, heterosexism, nativism, and national chauvinism; and
> * resisting war, imperialism and colonialism.

> Simultaneously, it features libertarian commitments to:

> * affirming robust protections for just possessory claims;
> * embracing freed markets and a social ideal of peaceful, voluntary cooperation; and
> * crafting a thoroughly anti-statist politics.

> Piaget's landmark studies indicated that kids don't grasp logic until they are approximately seven years old (Inhelder and Piaget 1958). And classic "theory of mind" experiments suggest that young children are poor psychologists. They act as if they don't understand the perspectives of other people (Wimmer and Perner 1983; Perner and Rossler 2012).

> But in recent decades, researchers have re-examined old assumptions and found reason for doubt. Kids, they say, may be confused by the experimental procedures. They may be puzzled by the unnatural wording of the test questions, or distracted by too many details. [...] [Misunderstandings about logic, “Theory of Mind" errors, and conservation errors can be eliminated by reformulating the question] [...]

> Do kids make logical errors because they can't turn off their "fast thinking" intuitions? Olivier Houdé and his colleagues have championed the idea, and it explains a lot. [...] ... kids really do know that elephants are bigger than rabbits, and they are capable of understanding that 5 coins don't become 6 coins merely because we move them around. But they have more trouble inhibiting the wrong answer. Their internal censor--the executive function that stops us from blurting out silly things--isn’t as powerful.

> That's an important developmental constraint on reasoning, but it doesn't mean kids are fundamentally irrational or illogical.

> Indeed, as Houdé and Gregoire Borst point out, young babies routinely pass tests of number conservation in the laboratory. They seem to understand that moving objects around can't change their number, and they don't have to inhibit the "length-equals-number" heuristic. They haven't learned it yet! We acquire intuitions and rules-of-thumb throughout our lives, and frequently have to choose between trusting these heuristics or taking a more effortful, careful approach to problem-solving. Adults, like children, can get it wrong, and we all benefit from learning to scrutinize the easy answers.

> The 1901 Dorland’s Medical Dictionary defined heterosexuality as an “abnormal or perverted appetite toward the opposite sex.” More than two decades later, in 1923, Merriam Webster’s dictionary similarly defined it as “morbid sexual passion for one of the opposite sex.” It wasn’t until 1934 that heterosexuality was graced with the meaning we’re familiar with today: “manifestation of sexual passion for one of the opposite sex; normal sexuality.” [...]

> “Prior to 1868, there were no heterosexuals,” writes Blank. Neither were there homosexuals. It hadn’t yet occurred to humans that they might be “differentiated from one another by the kinds of love or sexual desire they experienced.” Sexual behaviours, of course, were identified and catalogued, and often times, forbidden. But the emphasis was always on the act, not the agent. [...]

> In the late 1860s, Hungarian journalist Karl Maria Kertbeny coined four terms to describe sexual experiences: heterosexual, homosexual, and two now forgotten terms to describe masturbation and bestiality; namely, monosexual and heterogenit. [...] The next time the word was published was in 1889, when Austro-German psychiatrist Richard von Krafft-Ebing included the word in Psychopathia Sexualis, a catalogue of sexual disorders. [...] For Krafft-Ebing, normal sexual desire was situated within a larger context of procreative utility, an idea that was in keeping with the dominant sexual theories of the West. In the Western world, long before sex acts were separated into the categories hetero/homo, there was a different ruling binary: procreative or non-procreative. The Bible, for instance, condemns homosexual intercourse for the same reason it condemns masturbation: because life-bearing seed is spilled in the act. [...]

> The importance of this shift – from reproductive instinct to erotic desire – can’t be overstated, as it’s crucial to modern notions of sexuality. When most people today think of heterosexuality, they might think of something like this: Billy understands from a very young age he is erotically attracted to girls. One day he focuses that erotic energy on Suzy, and he woos her. The pair fall in love, and give physical sexual expression to their erotic desire. And they live happily ever after.

> Without Krafft-Ebing’s work, this narrative might not have ever become thought of as “normal.” There is no mention, however implicit, of procreation. Defining normal sexual instinct according to erotic desire was a fundamental revolution in thinking about sex. Krafft-Ebing’s work laid the groundwork for the cultural shift that happened between the 1923 definition of heterosexuality as “morbid” and its 1934 definition as “normal.”

Why foreign aid needs to become more evidence-based:

> ...what most of these dramatic changes don’t correlate with is foreign aid. Aid has resulted in remarkably few significant shifts in economic growth and poverty reduction. The truth is much of aid’s promise has come up empty.

> It is striking that the aid establishment has not dug deeper into the reasons why. It has not listened to four decades of trenchant critiques, many of them by insiders. Countless articles and at least thirty widely read books about aid (such as Michael Maren’s 1997 The Road to Hell: The Ravaging Effects of Foreign Aid and International Charity, or William Easterly’s 2006 The White Man’s Burden: Why the West’s Efforts to Aid the Rest Have Done So Much Ill and So Little Good, or Dambisa Moyo’s 2009 Dead Aid), have pointed out that outsiders cannot “nation build,” that development must be led by the people in the poor countries themselves, that dependency has been one of the few tangible results of the trillions we have spent, that the complexity and the context-specific nature of each country’s politics, social structure, and culture cannot be easily understood by outsiders and thus the short term three to five year aid “project” is a wildly inappropriate vehicle for aid, and so on. [...]

> There are now thousands of ongoing projects that amount to band-aid solutions where the results of “our” interventions disappear almost immediately after the departure of our “expert” teams in their Land Cruisers: new water wells dug in villages where previous donor-built wells have failed; countless capacity-building workshops attended by poor people who are often motivated by the “sitting allowance”—a cash gift; tools given out to farmers who then sell them; projects that attempt to convert sex workers into sellers of samosas on the streets of Addis Ababa without realizing that the money they make in the sex trade is far greater than anything else they can do; microfinance projects in South Sudan where the economy is so bad that there is no money for anyone to buy what a “micro-entrepreneur” might have to sell. There are more ineffective projects like these than ever, all presented as world-changing in the aid agencies’ marketing campaigns (see the websites of USAID or of any major international NGO).

> The main reason there is so little change is that aid has become an industry, and is rapidly moving towards what a present day Eisenhower might call an “aid-industrial complex,” an interlocking set of players (NGOs, government agencies, and private contractors, among others) who have largely closed off outside criticism and internal learning and become self-referential and entrenched.

> The previous post (Every attempt to manage academia makes it worse) has been a surprise hit, and is now by far the most-read post in this blog’s nearly-ten-year history. It evidently struck a chord with a lot of people [...] But I was brought up short by this tweet from Thomas Koenig: [honest questions: where does this incentivizing come from? who deems it necessary?] [...] I think we can fruitfully speculate on the underlying problem. [...]

> First, the things we really care about are hard to measure. The reason we do science — or, at least, the reason societies fund science — is to achieve breakthroughs that benefit society. That means important new insights, findings that enable new technology, ways of creating new medicines, and so on. But all these things take time to happen. It’s difficult to look at what a lab is doing now and say “Yes, this will yield valuable results in twenty years”. Yet that may be what is required: trying to evaluate it using a proxy of how many papers it gets into high-IF journals this year will most certainly militate against its doing careful work with long-term goals.

> Second, we have no good way to reward the right individuals or labs. What we as a society care about is the advance of science as a whole. We want to reward the people and groups whose work contributes to the global project of science — but those are not necessarily the people who have found ways to shine under the present system of rewards: publishing lots of papers, shooting for the high-IF journals, skimping on sample-sizes to get spectacular results, searching through big data-sets for whatever correlations they can find, and so on. [...]

> Given metrics’ terrible track-record of hackability, I think we’re now at the stage where the null hypothesis should be that any metric will make things worse. There may well be exceptions, but the burden of proof should be on those who want to use them: they must show that they will help, not just assume that they will.

> And what if we find that every metric makes things worse? Then the only rational thing to do would be not to use any metrics at all. Some managers will hate this, because their jobs depend on putting numbers into boxes and adding them up. But we’re talking about the progress of research to benefit society, here.

> You were going to get one-click access to the full text of nearly every book that’s ever been published. Books still in print you’d have to pay for, but everything else—a collection slated to grow larger than the holdings at the Library of Congress, Harvard, the University of Michigan, at any of the great national libraries of Europe—would have been available for free at terminals that were going to be placed in every local library that wanted one. [...] Books would become as instantly available, searchable, copy-pasteable—as alive in the digital world—as web pages. [...]

> At the heart of the settlement was a collective licensing regime for out-of-print books. Authors and publishers could opt out their books at any time. For those who didn’t, Google would be given wide latitude to display and sell their books, but in return, 63 percent of the revenues would go into escrow with a new entity called the Book Rights Registry. The Registry’s job would be to distribute funds to rightsholders as they came forward to claim their works; in ambiguous cases, part of the money would be used to figure out who actually owned the rights. [...]

> What became known as the Google Books Search Amended Settlement Agreement came to 165 pages and more than a dozen appendices. It took two and a half years to hammer out the details. Sarnoff described the negotiations as “four-dimensional chess” between the authors, publishers, libraries, and Google. “Everyone involved,” he said to me, “and I mean everyone—on all sides of this issue—thought that if we were going to get this through, this would be the single most important thing they did in their careers.” [...]

> In a statement filed with the court, the DOJ argued that the settlement would give Google a de facto monopoly on out-of-print books. [...] Whatever the motivation, the DOJ said its piece and that seemed to carry the day. In his ruling concluding that the settlement was not “fair, adequate, and reasonable” under the rules governing class actions, Judge Denny Chin recited the DOJ’s objections and suggested that to fix them, you’d either have to change the settlement to be an opt-in arrangement—which would render it toothless—or try to accomplish the same thing in Congress. [...]

> The irony is that so many people opposed the settlement in ways that suggested they fundamentally believed in what Google was trying to do. One of Pamela Samuelson’s main objections was that Google was going to be able to sell books like hers, whereas she thought they should be made available for free. (The fact that she, like any author under the terms of the settlement, could set her own books’ price to zero was not consolation enough, because “orphan works” with un-findable authors would still be sold for a price.) [...] Many of the objectors indeed thought that there would be some other way to get to the same outcome without any of the ickiness of a class action settlement. [...] Of course, nearly a decade later, nothing of the sort has actually happened. “It has got no traction,” Cunard said to me about the Copyright Office’s proposal, “and is not going to get a lot of traction now I don’t think.” [...]

> It certainly seems unlikely that someone is going to spend political capital—especially today—trying to change the licensing regime for books, let alone old ones. [...] Allan Adler, in-house counsel for the publishers, said to me, “a deep pocketed, private corporate actor was going to foot the bill for something that everyone wanted to see.” Google poured resources into the project, not just to scan the books but to dig up and digitize old copyright records, to negotiate with authors and publishers, to foot the bill for a Books Rights Registry. Years later, the Copyright Office has gotten nowhere with a proposal that re-treads much the same ground, but whose every component would have to be funded with Congressional appropriations.