The experiment relies on contrastive results (is it concept A or B?) rather than attempting to recognize an isolated word (which concept, out of x possible results, is this?), and the news highlights only the best results. Even so, the findings are telling: two different, unrelated languages are used to probe the deep structure of a concept (based on brain activity) while still working with linguistic units as they occur in natural language, and the results are positive. This points toward all human languages sharing a common structure that runs the same way across languages.
According to the news and the references provided within, the research team translated Japanese transcripts of dreams into English and, using a lexical database for the English language, established semantic-field relationships to reduce the more generic terms to the most specific ones. A new English-language database then provided pictures for the concepts that had appeared in "the originally Japanese language version" of the dreams, and the brain activity of the same subjects was recorded while they watched images retrieved through English-language related words. The team finally built a model to predict which category of content appeared in the original records of the (Japanese) dreams.
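The core mechanism of such a lexical database is an is-a (hypernym) hierarchy that links specific terms to generic categories. As a minimal sketch of that idea, here is a toy hierarchy and a function that walks it until it hits one of a chosen set of target categories; the words and the table are invented for illustration, not the team's actual database (WordNet is the usual English lexical database, but that is an assumption here):

```python
# Toy is-a (hypernym) table -- invented data, NOT the database used in
# the study. Each word points to its more generic parent term.
HYPERNYMS = {
    "taxi": "car",
    "car": "vehicle",
    "vehicle": "object",
    "sparrow": "bird",
    "bird": "animal",
    "animal": "object",
}

def reduce_to_category(word, categories):
    """Walk up the hypernym chain until a word in `categories` is reached.

    Returns the matching category, or None if the chain never reaches one.
    """
    while word not in categories:
        if word not in HYPERNYMS:
            return None  # no path to any of the target categories
        word = HYPERNYMS[word]
    return word

# Relate specific dream-report words to a small set of broad categories.
print(reduce_to_category("taxi", {"vehicle", "animal"}))     # vehicle
print(reduce_to_category("sparrow", {"vehicle", "animal"}))  # animal
```

The same walk works in either direction of granularity: pick coarser or finer target sets and the hierarchy mediates between a subject's exact words and the categories the images are drawn from.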
If a model that predicts an event (localized brain activity) corresponding to specific linguistic utterances (Japanese) can rely on an unrelated language (English) to build the whole network of "linguistic" relationships, it is reasonable to expect that the same could be done for any other human language.
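The prediction step itself is language-independent: it only needs labeled activity patterns per category. A minimal sketch, using invented feature vectors and a simple nearest-centroid rule (the actual study used fMRI voxel patterns and its own classifiers, so everything below is a toy assumption):

```python
# Toy sketch of the prediction step: classify a "brain activity" feature
# vector into a content category by nearest centroid. Vectors, categories,
# and the classifier choice are all illustrative assumptions.
def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_category(x, training):
    """training maps category -> list of feature vectors recorded
    while subjects viewed images from that category."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    centroids = {cat: centroid(vs) for cat, vs in training.items()}
    return min(centroids, key=lambda cat: dist2(x, centroids[cat]))

# Invented training data: activity while viewing images of each category.
training = {
    "vehicle": [[1.0, 0.1], [0.9, 0.2]],
    "animal":  [[0.1, 1.0], [0.2, 0.8]],
}
# A new activity pattern (e.g. recorded during a dream report) is assigned
# to whichever category's centroid it lies closest to.
print(nearest_category([0.85, 0.15], training))  # vehicle
```

Nothing in the classifier refers to Japanese or English: the language only enters through how the category labels were derived, which is the point of the argument above.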
Two different issues converge here (for this linguist): English as a lingua franca, and English as a basis for building "universal grammars". Which comes first: a natural language that becomes universally spoken, or sophisticated tools that translate any language's utterances (or thoughts)?