weekly reel November 25, 2018
Decanted Youth - Live Red Hook Open Studios - Nov 11 2018 (youtube-nocookie.com)
In the news:
- Scott Alexander / SSC: The economic perspective on moral standards.
- Simon Sinek: Most leaders don't even know the game they are in, on millennial workers, management, empathy. Thx bro.
- Colin Morris: Are pop lyrics getting more repetitive?, a question answered using ... data compression! Cool.
- [fr] Bure : le zèle nucléaire de la justice, or how the French government deploys its whole anti-terrorist surveillance arsenal against a handful of environmental activists. Via Nitot.
- Kottke: Chinese scientists are creating CRISPR babies.
- Facebook is a heap of organizational garbage; pick your language:
- Camille Fournier: I hate manager READMEs, 👍.
- Bloomberg: Beijing to judge all residents based on behavior by 2020.
- Ben Wander's quest to become a household name, thx Heinz.
"I think a lot of people know really famous directors for film, really famous authors. Games don't really get that. Coming from AAA, I get it -- we made games with 300 people, right? It's not one person. But especially if you're an indie game maker, either yourself or with like a really tiny group of people, I think there's no reason your name shouldn't be on the front of the tin."
- TNW: It takes 3 times more energy to mine Bitcoin than gold.
- Last but not least, NLR interviews Richard Stallman and it's a good one with interesting socio-political angles. On trust:
[MIT lab leader] Marvin Minsky didn’t like having doors locked, because he had a tendency to lose his keys. So the doors to the Lab and all the offices inside it were always open. There were no passwords for the time-sharing system. There was no file protection—literally: anybody could sit down at any console and do anything. [...]
The point is that when people share a computer, either they do so as a community, where they trust each other and resolve disputes, or it’s run like a police state, where there are a few who are the masters, who exercise total power over everyone else. [...]
Our way of dealing with kids coming in over Arpanet was to socialize them. We all participated in that. For example, there was a command you could type to tell the system to shut down in five minutes. The kids sometimes did that, and when they did we just canceled the shutdown. They were amazed. They would read about this command and think, surely it’s not going to work, and would type it—and get an immediate notification: ‘The system is shutting down in five minutes because of . . . ’
[Then] there was always a real user, who would just cancel the shutdown and say to that person, ‘Why did you try to shut the machine down? You know we’re here using it. You only do that if there’s a good reason.’ And the thing is, a lot of those people felt outcast by society—they were geeks; their families and their fellow students didn’t understand them; they had nobody. And we welcomed them into the community and invited them to learn and start to do some useful work. It was amazing for them not to be treated as trash.
On the political inclinations of free software:
Interviewer: One thing that’s striking about that culture—which is legendary in the history of computing—is that it flourished in an institution largely funded by the American Defense Department. Do you find it paradoxical that this sort of freedom could develop under the carapace of the Pentagon?
RMS: It’s paradoxical, but it actually makes perfect sense. They wanted to fund some research. They didn’t need to make it be done by jerks and downtrodden people—they just wanted it to get done. During the seventies, a number of the hackers at the AI Lab were bothered by the fact that it was funded by the US military. I thought that what mattered was what we were doing, not where the money came from, and at some point I reached a conclusion that funding from business was much worse.
I: Worse than funding from the military?
RMS: Much worse, because the businesses would try to restrict the use of what you did.
I: Nevertheless, this was the state that was bombing Vietnam.
RMS: Yes, it was. I was against the Vietnam War, just like everyone else in the Lab, but we weren’t helping them bomb Vietnam. Our work wasn’t particularly military, or even likely to be used in the short term. For instance, Greenblatt did a lot of work on chess programs; I mostly worked on improving various system programs—I developed the first Emacs text editor during that time.
I: How is the free-software movement related politically to other issues—does it have any natural allies?
RMS: Free software combines capitalist, socialist and anarchist ideas. The capitalist part is: free software is something businesses can use and develop and sell. The socialist part is: we develop this knowledge, which becomes available to everyone and improves life for everyone. And the anarchist part: you can do what you like with it. I’m not an anarchist—we need a state so we can have a welfare state. I’m not a ‘libertarian’ in the usual American sense, and I call them rather ‘antisocialists’ because their main goal is a laissez-faire, laissez-mourir economy. People like me are the true libertarians. I supported Bernie Sanders for President—Clinton was too right-wing for me—and the Green Party.
I: Would it be quite right to say there’s no anti-capitalist dynamic to free software? After all, capitalism proper involves excluding most of the population from means of production, and free software makes such means readily available to anyone. Market exchange is another matter, and could also characterize libertarian socialism, for example.
RMS: As I understand the term capitalism, it doesn’t necessarily mean that there are quasi-monopolies or oligopolies that politically dominate the world. I do condemn the current system of plutocracy very strongly. When I talk about capitalism, I mean private business. There is a difference between the economic system that the US has now and what it had in 1970. There are two different forms of capitalism, you might say—this one I call extreme capitalism, or plutocracy, in which businesses dominate the state. I’m definitely against plutocracy, but I don’t wish to identify capitalism with plutocracy, because there are other forms of capitalism that I have seen in my life. Basically, businesses shouldn’t decide our laws.
I: So your political orientation would be to some kind of social democracy with a mixed economy?
RMS: Yes. But that’s economic; and my political orientation is democracy and human rights. But to have democracy means the people control the state, which means the businesses don’t. So, in terms of economy, yes, I favour having private businesses. I don’t think state restaurants could have made the meal we just had.
On the AI "data race" and its compatibility with a privacy-respecting society:
I: It’s often argued now that attempts to deter American corporations from massive data collection, in the name of privacy or civil liberties, will allow China to win the new ‘space race’ in machine-learning and artificial intelligence. What’s your response to that?
RMS: Freedom and democracy are more important than advancing technology. If China and the US are in a race for Orwellian tyranny, I hope the US loses. Indeed, the US should drop out of the race as soon as possible. Our society has been taught to overestimate the importance of ‘innovation’. Innovations may be good, and they may be bad. If we let companies decide which innovations we will use, they will choose the ones that give them more of an advantage over us.
I: A complementary argument is that Silicon Valley may have the advantage over China in terms of innovation, but that the next stage in AI will be about implementation—the sheer amount of data an organization can run through the algorithms, and the scale of its processing power—where China has the advantage of a population four times greater than the US. (This seems to be the claim of Kai-Fu Lee in AI Superpowers.) How do you view these developments? What are the implications in terms of, first, software freedoms, and second, civil liberties?
RMS: Such AIs will work for companies; their use will be to help companies manipulate people better and dominate society more. I think we should restrict the collection of data about people, other than court-designated suspects. If that holds the companies back in developing AI techniques to help them dominate society, so much the better.
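Back to Colin Morris's piece linked above: the trick of measuring repetitiveness with data compression is easy to replay at home. The more a text repeats itself, the better a general-purpose compressor like DEFLATE can squeeze it, so the compressed-to-original size ratio is a crude repetitiveness score. A minimal sketch in Python (the sample strings are invented for illustration, not from the article):

```python
import zlib

def compression_ratio(text: str) -> float:
    """Compressed size divided by original size; lower means more repetitive."""
    data = text.encode("utf-8")
    return len(zlib.compress(data, 9)) / len(data)

# A chorus-like, highly repetitive string compresses much better
# than a varied one of comparable length.
chorus = "na na na na hey hey hey goodbye " * 20
varied = ("The quick brown fox jumps over the lazy dog "
          "while seven wizards mix jugs of strange liquid quickly.")

print(compression_ratio(chorus))  # small ratio: very repetitive
print(compression_ratio(varied))  # ratio close to (or above) 1: little redundancy
```

Morris's actual analysis is more careful than this (he uses compression on a large lyrics corpus and visualizes the repeated substrings), but the underlying signal is the same ratio.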