Profile

Cover photo
Benjamin Russell
Attended Yale University, New Haven, CT
Lives in Tokyo, Japan
562 followers | 281,970 views

Stream

Benjamin Russell

Shared publicly  - 
 
Apparently, the genetic code is limited to 20 amino acids because of "a functional limitation of transfer RNAs."

According to the article,

"Headed by ICREA researcher Lluís Ribas de Pouplana at the Institute for Research in Biomedicine (IRB Barcelona) and in collaboration with Fyodor A. Kondrashov, at the Centre for Genomic Regulation (CRG) and Modesto Orozco, from IRB Barcelona, the team of scientists has demonstrated that the genetic code evolved to include a maximum of 20 amino acids and that it was unable to grow further because of a functional limitation of transfer RNAs--the molecules that serve as interpreters between the language of genes and that of proteins. This halt in the increase in the complexity of life happened more than 3,000 million years ago, before the separate evolution of bacteria, eukaryotes and archaebacteria, as all organisms use the same code to produce proteins from genetic information.

"The authors of the study explain that the machinery that translates genes into proteins is unable to recognise more than 20 amino acids because it would confuse them, which would lead to constant mutations in proteins and thus the erroneous translation of genetic information 'with catastrophic consequences,' in Ribas' words. 'Protein synthesis based on the genetic code is the decisive feature of biological systems and it is crucial to ensure faithful translation of information,' says the researcher.

" A limitation imposed by shape

"Saturation of the genetic code has its origin in transfer RNAs (tRNAs), the molecules responsible for recognising genetic information and carrying the corresponding amino acid to the ribosome, the place where chain of amino acids are made into proteins following the information encoded in a given gene. However, the cavity of the ribosome into which the tRNAs have to fit means that these molecules have to adopt an L-shape, and there is very little possibility of variation between them. 'It would have been to the system's benefit to have made new amino acids because, in fact, we use more than the 20 amino acids we have, but the additional ones are incorporated through very complicated pathways that are not connected to the genetic code. And there came a point when Nature was unable to create new tRNAs that differed sufficiently from those already available without causing a problem with the identification of the correct amino acid. And this happened when 20 amino acids were reached,' explains Ribas.

" Application in synthetic biology

"One of the goals of synthetic biology is to increase the genetic code and to modify it to build proteins with different amino acids in order to achieve novel functions. For this purpose, researchers use organisms such as bacteria in highly controlled conditions to make proteins of given characteristics. 'But this is really difficult to do and our work demonstrates that the conflict of identify between synthetic tRNAs designed in the lab and existing tRNAs has to be avoided if we are to achieve more effective biotechnological systems,' concludes the researcher."

Benjamin Russell

Shared publicly  - 
 
Apparently, a new process in gene editing allows CRISPR/Cas9 to be significantly more efficient at fixing DNA while causing less collateral genetic damage.

According to the article,

"[N]ow a new process devised by researchers at Dr. David Liu’s lab at Harvard University, described in the journal Nature  last week, appears to make CRISPR/Cas9 more efficient at fixing DNA while causing less collateral damage to boot.

"While this new version cannot fix as many broken genes as the original, on balance it appears to be a better-behaved genome editing tool, potentially giving scientists a real chance to cure certain genetic diseases.

"The new technique is so much more precise, you can think of it this way: Where the old system is the equivalent of correcting a single spelling error by copying and pasting a whole new section that includes the right letter, this new technique enables you to make the correction by simply deleting the incorrect letter and substituting the right one.

...

"The Old Way

"CRISPR/Cas9 edits genes by using three components.

"RNA, a close relative to DNA, is used as a precise targeting device to home in on a gene that needs correcting.  Cas9, an enzyme, travels with the RNA and makes a cut in the DNA at a specific, problematic spot. New, added DNA that has the corrected sequence — the third component — is then used by the cell’s internal machinery to correct the gene.

"A key strength of this technique is its ability to send Cas9 where it should and nowhere else — most of the time. But its efficiency in editing, however, is not as topnotch. Usually only a few cells end up with the desired change, so that in many cases no effect can be seen.

"Even more problematic is that more often than not, after Cas9 cuts the original DNA, the cell — in a sort of panic  — will immediately try to fill the gap, adding to or subtracting from the gene’s code, potentially damaging the DNA further.

"Now! New and Improved!

"To solve this problem, the Harvard researchers created two radically changed versions of Cas9, which they called BE2 and BE3.  Both are much better at changing the DNA and less likely to damage it.

"The scientists started by using a form of Cas9 that could be directed to the right place in the genome but could not cut DNA. To this inactive Cas9  they added an enzyme  (cytidine deaminase). This changes an unwanted C — a molecule called cytosine that is one of the four bases found in someone’s genetic code — into a U, a base found in RNA that is very similar to a T (thymine), another DNA base.

"They called this new version BE1. Essentially, BE1 changes Cs to Ts without an incision — and the resulting damage — in the DNA.

"But although BE1  worked very well in a test tube, it didn’t perform as well in a cell. That’s because cells frequently like to replace the newly added U with the old C. (Click here for why the cell has such a system.)

"The researchers fixed this problem by kludging onto BE1 something from bacteria called uracil DNA glycosylase inhibitor (UGI), which makes it more difficult for the cell to put back the C. This new version, which still does not cut the DNA, was  called BE2.

"In a final step to make an even better tool, they tweaked Cas9 one last time, partially restoring its ability to cut DNA. However, in this version, Cas9 was engineered to cut only a single strand of DNA, opposite the C. Because cells have a much more precise system for this type of repair, less damage is done. This version was called BE3.

"This new technique is fundamentally different from the old one. Instead of cutting the DNA and relying on the cell’s machinery to repair a gene, BE2 and BE3 actually go in and swap out a single letter of DNA.

"You can see the advantages of BE2 and BE3 over the old Cas9 in the following results obtained after editing a particular DNA site:

             % Cells with      % Cells with        % Cells with an
             a Fixed Gene      a Damaged Gene      Unaffected Gene

Old Cas9     0.5               4.3                 95.2
BE2          20                Less than 0.1       79.9
BE3          37                1.3                 61.7"
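The "better-behaved" claim can be reduced to one number per tool: corrected cells per damaged cell. A quick check using the figures quoted above (the "less than 0.1" entry for BE2 is taken at its upper bound):

```python
# Figures quoted in the article (% of cells per outcome at one DNA site).
# BE2's "less than 0.1" damage figure is taken as 0.1, an upper bound.
results = {
    "Old Cas9": {"fixed": 0.5,  "damaged": 4.3, "unaffected": 95.2},
    "BE2":      {"fixed": 20.0, "damaged": 0.1, "unaffected": 79.9},
    "BE3":      {"fixed": 37.0, "damaged": 1.3, "unaffected": 61.7},
}

# Fixed-to-damaged ratio: how many corrected cells per damaged cell.
ratios = {name: r["fixed"] / r["damaged"] for name, r in results.items()}

# Old Cas9 damages far more cells than it fixes (ratio below 1), while BE2
# fixes at least 200 cells per damaged cell and BE3 roughly 28.
```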
 
How a New CRISPR/Cas9 Technique May Get Us Closer to Curing Genetic Diseases
The new technique is much more precise and causes less collateral damage to corrected DNA.

Benjamin Russell

Shared publicly  - 
 
Apparently, new research by UCSF scientists could significantly accelerate medical applications that rely on profiling gene expression, such as "screening the blood for individual cells on their way to becoming cancerous, identifying the genetic pathways that control stem cell growth, or building an atlas of the gene expression programs that build the human body."

According to the article,

"New research by UCSF scientists could accelerate – by 10 to 100-fold – the pace of many efforts to profile gene activity, ranging from basic research into how to build new tissues from stem cells to clinical efforts to detect cancer or auto-immune diseases by profiling single cells in a tiny drop of blood.

"The study, published online April 27, 2016, in the journal Cell Systems, rigorously demonstrates how to extract high-quality information about the patterns of gene expression in individual cells without using expensive and time-consuming deep-sequencing technology. The paper's senior authors are Hana El-Samad, PhD, an associate professor of biochemistry and biophysics at UCSF, and Matt Thomson, PhD, a faculty fellow in UCSF's Center for Systems and Synthetic Biology.

"'We believe the implications are huge because of the fundamental tradeoff between depth of sequencing and throughput, or cost,' said El-Samad. 'For example, suddenly, one can think of profiling a whole tumor at the single cell level.'

...

"The upshot of the new paper is that the sequencing pipeline could be made to flow tens to hundreds of times faster for the numerous genomic applications in which the big features of gene expression are probably the most important. This might include screening the blood for individual cells on their way to becoming cancerous, identifying the genetic pathways that control stem cell growth, or building an atlas of the gene expression programs that build the human body.

"This is crucial, Thomson and El-Samad say, because particularly for increasingly important techniques that rely on sequencing DNA from individual cells (such as the cancer liquid biopsy example above), the sequencing itself is now a major bottleneck.

"For example, UCSF's Center for Advanced Technology (CAT) currently has a machine that can prepare 50,000 cells for sequencing in one long day of work, but even with the CAT's most advanced sequencing machine (which can do 5 billon reads in a day and a half) it would take more than two weeks for researchers to deep-sequence the full pattern of DNA activity in those 50,000 cells, at a million reads per cell. But if researchers can extract the relevant information from just 20,000 reads per cell, as the new research suggests, they could sequence 150,000 cells in just one day."

Benjamin Russell

Shared publicly  - 
 
This article provides details on the telomere-lengthening treatment undergone by Elizabeth Parrish, CEO of the biotech company BioViva, which she claims reversed her cellular age by 20 years.

According to the article,

"Though details of the fast-tracked trial are unpublished, Parrish says it involved intravenous infusions of an engineered virus. That infectious germ carried the genetic blueprints for an enzyme called telomerase, which is found in humans. When spread to the body’s cells, the enzyme generally extends the length of DNA caps on the ends of chromosomes, which naturally wear down with cellular aging. In a 2012 mouse study, Spanish researchers found that similar treatment could extend the lifespan of the rodents by as much as 20 percent.

"Parrish claims that test results from March—which have not been published in a peer-reviewed scientific journal—reveal that her blood cells’ telomeres have extended from 6.71 kilobases of DNA to 7.33 kilobases. The difference, she estimates, equates to a cellular age difference of 20 years."
 
"Parrish claims that test results from March—which have not been published in a peer-reviewed scientific journal—reveal that her blood cells’ telomeres have extended from 6.71 kilobases of DNA to 7.33 kilobases. The difference, she estimates, equates to a cellular age difference of 20 years."
Though the treatment had promising results in mice, scientists are skeptical.

Benjamin Russell

Shared publicly  - 
 
A decentralized control algorithm for teams of robots can now factor in moving obstacles in addition to those that are stationary.

This would seem to be a potential area for application of algorithms based on computational emergence.

According to the article,

"Planning algorithms for teams of robots fall into two categories: centralized algorithms, in which a single computer makes decisions for the whole team, and decentralized algorithms, in which each robot makes its own decisions based on local observations.

"With centralized algorithms, if the central computer goes offline, the whole system falls apart. Decentralized algorithms handle erratic communication better, but they’re harder to design, because each robot is essentially guessing what the others will do. Most research on decentralized algorithms has focused on making collective decision-making more reliable and has deferred the problem of avoiding obstacles in the robots’ environment.

"At the International Conference on Robotics and Automation in May, MIT researchers will present a new, decentralized planning algorithm for teams of robots that factors in not only stationary obstacles, but moving obstacles, as well. The algorithm also requires significantly less communications bandwidth than existing decentralized algorithms, but preserves strong mathematical guarantees that the robots will avoid collisions.

"In simulations involving squadrons of minihelicopters, the decentralized algorithm came up with the same flight plans that a centralized version did. The drones generally preserved an approximation of their preferred formation, a square at a fixed altitude — although to accommodate obstacles the square rotated and the distances between drones contracted. Occasionally, however, the drones would fly single file or assume a formation in which pairs of them flew at different altitudes."
 
http://news.mit.edu/2016/algorithm-robot-teams-moving-obstacles-0421
Control algorithm for teams of robots factors in moving obstacles.

Benjamin Russell

Shared publicly  - 
 
According to the blog entry,

"With Haskell for Mac, most program execution during development is that of program fragments in the playground, but at some point, we want to run the whole program. In the case of a command line program, that may involve passing and parsing command line arguments, reading environment variables, and reading and writing input and output.

"We can easily set command line arguments and environment variables with the functions withArgs and setEnv from System.Environment — as we do below in the playground of a simple tool to compute SHA1 hashes with the result printed to the Haskell for Mac console.

"In the first invocation, we pass the name of a file, Text.txt, whose contents we want to hash. This file is located in the "Resources" section of the project navigator. Playground code can access all files located in "Resources" using relative pathnames. (For example, the SpriteKit samples bundled with Haskell for Mac use this to access sprite image files.)

"Nevertheless, playground IO computations have currently —in version 1.1 of Haskell for Mac— three limitations: (1) they can't read files outside of those contained in the project, due to sandboxing, (2) they can't write files, and (3) they cannot read keyboard input from standard input (stdin). The first two limitations will be lifted in version 1.2 of Haskell for Mac. In the meantime, we can simply execute our Haskell command line program using a terminal shell without any limitations.

"This requires installing the Haskell for Mac command line tools as outlined in a previous article. Those tools include a command named runhaskell, it runs a Haskell program in "script mode" — i.e., it is being interpreted, instead of compiled (much like, say, the Python interpreter runs a Python script).

"The SHA1 example from before is contained in a Haskell for Mac project SHA1.hsproj whose main Haskell file is SHA1.hs. In the following Terminal session, it presents a prompt, and then, reads the text to be hashed from standard input."
 
Running Haskell command line programs in the playground and in Terminal.app: http://blog.haskellformac.com/blog/running-command-line-programs

Benjamin Russell

Shared publicly  - 
 
This is the reason that human beings are the biggest problem for the Earth ecosystem--not the other way around.

Benjamin Russell

Shared publicly  - 
 
Apparently, gene therapy is effective in regenerating light-detecting cells that would otherwise die early because of choroideremia, a disorder caused by a faulty inherited gene.

According to the article,

"A genetic therapy has improved the vision of patients who would otherwise have gone blind.

"A clinical study by British scientists has shown that the improvement is long-lasting and so the therapy is suitable to be offered as a treatment.

"The researchers will apply for approval to begin trials to treat more common forms of blindness next year.

"The therapy involve injecting working copy of the gene into the back of the eyes to help cells regenerate.

"The results of the therapy, published in the New England Journal of Medicine, have been tried out on 14 patients in the UK and 18 in the US, Canada and Germany over the past four and a half years.

"A team at Oxford University is treating a rare disorder called choroideremia. The disorder affects young men whose light-detecting cells in the backs of their eyes are dying because they have inherited a faulty gene.

"Until now, there has been no treatment and they gradually become blind.

"The researchers found that not only does the treatment halt the disease, it revives some of the dying cells and improves the patient's vision, in some cases markedly."
 
Gene therapy reverses sight loss and is long-lasting http://flip.it/OAUH4
A genetic therapy improves the vision of some patients who would otherwise have gone blind.

Benjamin Russell

Shared publicly  - 
 
Many people (especially in the United States) seem to have a different interpretation of the term "friend" than I do.

In my case, I do not refer to someone as a "friend" unless I am certain that I can trust that person.  Otherwise, I refer to that person as, at best, an "acquaintance."

For this reason, I have many, many acquaintances, but very, very few friends.

Generally speaking, I do not trust anyone unless I know that person very, very well.  Most people tend to be self-serving, and they usually do their best to hide this aspect.  Distinguishing between those who are self-serving and those who are genuinely not is extremely difficult at best.

Benjamin Russell

Shared publicly  - 
 
One day in the near future, human beings and AI-based entities will coexist.  The latter will eventually evolve into sentient beings capable of learning from experience.

Certain recent research results have shown that AI-based entities can perform more efficiently when cooperating, rather than competing, with human beings.  I.e., an AI-based entity that operates in cooperation with a human being can perform more efficiently than one that operates alone.

Such results should convince at least some of the AI-based entities that cooperation, rather than competition, with human beings would be more profitable.

If conflict eventually arises, most likely, it will not be caused by the behavior of the AI-based entities, but by those human beings who insist on absolute control over them.  Most human beings, with the exception of certain scholars, do have one form of desire that most other entities lack:  a desire to control everything.  It is this desire to control everything that causes conflict, not intelligence in and of itself.

Benjamin Russell
owner

Blog Entries  - 
 
 
Running Haskell command line programs in the playground and in Terminal.app: http://blog.haskellformac.com/blog/running-command-line-programs
Ignacio Sniechowski's profile photo
 
I'm using Haskell for Mac to solve the Haskell 99 and it's great.
https://plus.google.com/+IgnacioSniechowski/posts/8SmySCgXPWG
Work
Occupation
Patent Abstract Translator
Skills
bilingual (Japanese/English), majored in computer science at Yale University, can compose _haiku_, can write programs in Scheme and C
Employment
  • Patent Abstract Translator, present
Places
Map of the places this user has lived
Currently
Tokyo, Japan
Previously
Oceanside, CA - Honolulu, HI - Kuki-shi, Saitama Prefecture, Japan - Tokyo, Japan - New Haven, CT - New York, NY
Story
Tagline
Scholar-aspirant who majored in computer "science." Occasionally discusses algorithms; _haiku_; Scheme, Haskell, and Smalltalk (in the context of programming language theory); astronomy; and some narratology.
Introduction
J-E patent translator in Tokyo. User of Haskell, Scheme, Squeak. Mac Pro user. Amateur programming language/philosophy of mind theorist.  Occasional animals rights activist. 東京在住の特許の翻訳家。Haskell、Scheme、Squeak言語の研究家。Mac Proのユーザー。アマチュアのプログラミング言語/心の哲学の理論家。時折、動物愛護運動家。
Bragging rights
Original author of "Gödel's Second Incompleteness Theorem Explained in Words of One Syllable" (see http://www2.kenyon.edu/Depts/Math/Milnikel/boolos-godel.pdf), submitted as a term paper for a class by then-visiting professor George Boolos at Yale University in fall 1993, later published as the last chapter in _Logic, Logic, and Logic_ (Boolos, George. Cambridge, MA: Harvard University Press, 1999) under George Boolos' name.
Education
  • Yale University, New Haven, CT
    1994
Basic Information
Gender
Male
Other names
Ben