Trevor Clarke
136 followers
Posts

Post is pinned. Post has attachment
I've said it before. Use Signal. Use Signal. Use Signal! The metadata they collect? The phone number you used to sign up (that's your identifier) and the last day you signed in (just the day, no hour, no minute). That's it.

“Why I told my friends to stop using WhatsApp and Telegram” @romaub https://medium.freecodecamp.com/why-i-asked-my-friends-to-stop-using-whatsapp-and-telegram-e93346b3c1f0

Poor G+, you had a good run

Post has attachment
Hmmm, Fog Creek has a new IDE/hosting/code example/community thing called Glitch. Why another one? There are plenty out there, but Fog Creek has made some really great tools in the past (Trello, Stack Overflow, etc.), so there's a good chance they've done a good job here too. I haven't had a lot of time to look at it yet, but I plan to check it out.

Nice. As part of his 2015 TEDx talk, Bryan Seely set up a Google local listing for a new snowboard shop in the Oval Office called Edward's Snow Den.

There once was a chat app named AIM,
that was so fun it seemed like a game.
Your friends used it too,
Much more than ICQ.
But now your screen name seems pretty lame.

Post has attachment
Time to plug for a minute. I've been playing around with hosting on Vultr lately and I'm really impressed. They've got a wide array of cost options, from $2.50-a-month cloud instances (although those are usually sold out; the $5-a-month instances are generally available) up through bare metal instances. The interface is clean, easy, and functional. Bandwidth, response, etc. are great for the price (I'm hosting a couple of personal apps there, so I haven't really taxed them). Installs are very flexible, including all the standard OS choices (Linux flavors, FreeBSD, OpenBSD, CoreOS, Windows), the more common apps and stacks (LAMP, LEMP, WordPress, Nextcloud, etc.), and the ability to upload your own ISO.

They run entirely on Intel processors, are beta testing block storage, have storage-oriented instances, and keep billing simple (instances are billed by the hour for as long as they exist, with the per-month price as the maximum charge for the month). They also offer full IPv6 connectivity with a routable /64, automatic backups, extra DDoS protection (at an extra cost, I believe), and virtual private networks for setting up clusters.

I'm attaching my affiliate link so that I get a little kickback if you decide to use them, but I'm plugging them because it really is a great service.

Post has shared content
Whoa! Universal translator!
Linguistics team using Ohio Supercomputer Center to translate lesser-known languages

Scientists are developing technology for languages about which translators and linguists know nothing. Off the top of your head, how many languages can you name? Ten? Twenty? More? It is estimated there are more than 7,000 languages worldwide. For those involved in disaster relief efforts, the breadth and variety of that number can be overwhelming, especially when addressing areas with low resources.

William Schuler, Ph.D., a linguistics professor at The Ohio State University, is part of a project called Low Resource Languages for Emergent Incidents (LORELEI), an initiative through the Defense Advanced Research Projects Agency (DARPA). The LORELEI program's goal is to develop technology for languages about which translators and linguists know nothing. As part of LORELEI, Schuler and his team are using the Ohio Supercomputer Center's Owens Cluster to develop a grammar acquisition algorithm to discover the rules of lesser-known languages, learning the grammars without supervision so disaster relief teams can react quickly.

"We need to get resources to direct disaster relief, and part of that is translating news text, knowing names of cities, what's happening in those areas," Schuler said. "It's figuring out what has happened rapidly, and that can involve automatically processing incident language."

Schuler's team is working to build a Bayesian sequence model based on statistical analysis to discover a given language's grammar. It is hypothesized this parsing model can be trained to learn a language and make it syntactically useful.

Post has attachment
Here's something you can do about the net neutrality changes, and it will make your net use more secure as well.

Anyone have a good package or template for designing lined stationery for inclusion in a text signature using TeX/LaTeX? I'm thinking about adapting newlfm.
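
In case it helps frame the question, here's a minimal sketch of the ruled-lines part of what I'm after, assuming plain article class rather than newlfm; the 20-line count and 1.5-baseline spacing are placeholders I made up for illustration.

    \documentclass{article}
    \usepackage[margin=1in]{geometry}
    \begin{document}
    % Draw ruled writing lines with a plain TeX loop.
    % 20 lines and 1.5\baselineskip of gap are arbitrary choices.
    \newcount\linecount
    \linecount=0
    \loop
      \noindent\rule{\textwidth}{0.4pt}\par
      \vspace{1.5\baselineskip}
      \advance\linecount by 1
    \ifnum\linecount<20
    \repeat
    \end{document}

What I'd want from a package or template is this plus proper letterhead and signature handling, which is why newlfm looks like the closest starting point.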

Post has shared content
Autoencoding Blade Runner

Reconstructing films with artificial neural networks. In this blog post I detail the work I have been doing over the past year in getting artificial neural networks to reconstruct films: by training them to reconstruct individual frames from films, and then getting them to reconstruct every frame in a given film and resequencing it. The type of neural network used is an autoencoder. An autoencoder is a type of neural net with a very small bottleneck: it encodes a data sample into a much smaller representation (in this case a 200-digit number), then reconstructs the data sample to the best of its ability. The reconstructions are in no way perfect, but the project was more of a creative exploration of both the capacity and limitations of this approach. This work was done as the dissertation project for my research masters (MSci) in Creative Computing at Goldsmiths.
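
To make the architecture concrete, here's a minimal sketch of a frame autoencoder in PyTorch. This is my own illustrative reconstruction, not the author's actual model; the layer sizes, the 64x64x3 frame size, and the training loop are assumptions I picked for the example, and only the 200-unit bottleneck comes from the post.

    import torch
    import torch.nn as nn

    class FrameAutoencoder(nn.Module):
        """Squeeze a flattened frame through a 200-unit bottleneck,
        then reconstruct it. Layer sizes are illustrative guesses."""
        def __init__(self, frame_pixels=64 * 64 * 3, code_size=200):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(frame_pixels, 1024), nn.ReLU(),
                nn.Linear(1024, code_size),
            )
            self.decoder = nn.Sequential(
                nn.Linear(code_size, 1024), nn.ReLU(),
                nn.Linear(1024, frame_pixels), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    # Train the net to reproduce each frame from its own compressed code;
    # random tensors stand in for real film frames here.
    model = FrameAutoencoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    frames = torch.rand(32, 64 * 64 * 3)
    for _ in range(200):
        optimizer.zero_grad()
        loss = loss_fn(model(frames), frames)
        loss.backward()
        optimizer.step()

Once trained, encoding every frame of a film and decoding the codes back out in order gives the resequenced reconstruction the post describes.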