NITRD Program
The Networking and Information Technology Research and Development (NITRD) Program

LSN IWG: Large Scale Networking (LSN) Workshop On Operationalizing SDN

The Large Scale Networking (LSN) Workshop on Operationalizing SDN will be held September 18-20 in Washington, D.C. The workshop follows two previous workshops held in 2013 and 2015, and will invite participants from academia, industry, open source software communities, and government agencies to discuss the current state of, and paths forward for, realizing an open, innovative, multi-domain, and interoperable SDN as an operational infrastructure. The workshop intends to facilitate an objective and constructive dialogue among the different SDN stakeholder communities. It will provide input to help LSN identify opportunities and strategies toward the next important milestones for operational SDN.

Faculty, PhD students, researchers, visionary and technical members of industry, and government agency staff are welcome. Travel expenses will be supported for a limited number of US-based researchers and students. The workshop will strive for diversity in participation.

Registration is open until filled. For more information and to participate in the workshop, please visit

To learn more about the Large Scale Networking (LSN) Interagency Working Group (IWG) please visit LSN IWG home page:


Open Knowledge Network
3rd Workshop on an Open Knowledge Network:
Enabling the Community to Build the Network

October 3, 2017 - October 4, 2017
National Institutes of Health (NIH)

Starting in July 2016, the Big Data Interagency Working Group (BD IWG) leadership has participated in two meetings to discuss the viability of, and possible first steps toward creating, a joint public/private open data network infrastructure, the Open Knowledge Network (OKN). The vision of the OKN is to create an open knowledge graph of all known entities and their relationships, ranging from the macro (have there been unusual clusters of earthquakes in the US in the past six months?) to the micro (what is the best combination of chemotherapeutic drugs for a 56-year-old female with stage 3 glioblastoma and an FLT3 mutation but no symptoms of AML?). The OKN is meant to be an inclusive, open, community activity resulting in a knowledge infrastructure that could facilitate and empower a host of applications and open new research avenues, including how to create trustworthy knowledge networks/graphs.

A third workshop is planned for October 3-4 at NIH. This workshop will examine which OKN-related projects the Federal agencies are already involved in, how those projects can collaborate with private sector efforts, and what the next steps need to be. The workshop will focus on particular domains and discuss how to enable an open, contributing community. The meeting is by invitation only, but the plenary sessions will be available via webcast.

For more information, please refer to the OKN White Paper and the OKN Next Steps documents that were produced as a result of the first two meetings.

To learn more about the Big Data Interagency Working Group (IWG), please visit the Big Data IWG home page:

NSF Data Science Series: "How Predictable is the Spread of Information?"

July 19, 2017 / Room 110 / 1pm-2pm

Speaker: Jake Hofman, Senior Researcher at Microsoft Research in New York City

"How does information spread in online social networks, and how predictable are online information diffusion events? Despite a great deal of existing research on modeling information diffusion and predicting "success" of content in social systems, these questions have remained largely unanswered for a variety of reasons, ranging from the inability to observe most word-of-mouth communication to difficulties in precisely and consistently formalizing different notions of success.

This talk will attempt to shed light on these questions through an empirical analysis of billions of diffusion events under one simple but unified framework. We will show that even though information diffusion patterns exhibit stable regularities in the aggregate, it remains surprisingly difficult to predict the success of any particular individual or single piece of content in an online social network. Evidence from our simulations further suggests that, rather than resulting from any shortcomings in our estimates or models, this unpredictability may be a hallmark of the information diffusion process itself.


Jake Hofman is a Senior Researcher at Microsoft Research in New York City, where he works in the field of computational social science. Prior to joining Microsoft, he was a Research Scientist in the Microeconomics and Social Systems group at Yahoo! Research. He is an Adjunct Assistant Professor of Applied Mathematics and Computer Science at Columbia University and runs Microsoft's Data Science Summer School to promote diversity in computer science. He holds a B.S. in Electrical Engineering from Boston University and a Ph.D. in Physics from Columbia University."

"The American Association for the Advancement of Science (AAAS) Science & Technology Policy Fellowship Big Data Affinity Group, in collaboration with the South Big Data Hub and the West Big Data Innovation Hub, invites you to a data visualization and storytelling event on July 14, 2017, from 8:30 am to 4:00 pm ET in Washington, DC. This event is the second in The Science of Data-Driven Storytelling workshop series (#datascistories). The meeting brings together a community of policymakers and data storytellers to visualize insights from data in ways that generate effective communication.

Since this event is currently full, you can participate via WebEx for the plenary sessions (9am-12pm and 2:50pm-4pm ET). Call-in number: 1-415-655-0003; Event number: 641 886 660; Event password: dataviz. Please note that you will be asked to register, so allow extra time before the event starts."

Department of Energy Awards Six Research Contracts Totaling $258 Million to Accelerate U.S. Supercomputing Technology

JUNE 15, 2017

"WASHINGTON, D.C. - Today U.S. Secretary of Energy Rick Perry announced that six leading U.S. technology companies will receive funding from the Department of Energy’s Exascale Computing Project (ECP) as part of its new PathForward program, accelerating the research necessary to deploy the nation’s first exascale supercomputers.

The awardees will receive funding for research and development to maximize the energy efficiency and overall performance of future large-scale supercomputers, which are critical for U.S. leadership in areas such as national security, manufacturing, industrial competitiveness, and energy and earth sciences. The $258 million in funding will be allocated over a three-year contract period, with companies providing additional funding amounting to at least 40 percent of their total project cost, bringing the total investment to at least $430 million."

Read more:


NSF Data Science Series
"A View of the Cloud enabling broad Data Science Education"
June 14, 2017 / Room 110 / 11am ET
Speaker: David E. Culler, University of California, Berkeley

Abstract: Two years ago, UC Berkeley launched a Data Science education program with the goal of bringing computational and inferential thinking, in the context of real-world questions and data, to the entire undergraduate community, as well as developing depth in the emerging discipline and an undergraduate major. The program has grown from zero to two thousand students in two years, starting from a freshman-level Foundations of Data Science course and growing out into a network of two dozen 'connector' and advanced courses. A key technological component of this effort is the use of the cloud to reduce the barrier to entry for students, especially those not pursuing computer science related studies, and for faculty seeking to stand up a course or a data science module within an existing course. This view of the cloud as an enabler extends through several aspects of the data science learning experience. Students need no more than a browser to open this domain of learning. All lectures, labs, and assignments take place as hosted Jupyter notebooks. Each unfolds as a kind of computational narrative, starting from a question and relevant raw data and evolving through various visualizations and analyses to reach an observation or conclusion - a very different introductory programming experience. The infrastructure behind it is sophisticated and designed to scale, but a new instructor needs to do little more than populate a GitHub repository. Other tools and services, including authentication, storage, auto-grading, and assisted learning, become part of the learning environment. Advanced courses expose students to more of the technology they have been using, as it exists out in the world. But equally important are the social networks among faculty, researchers, and students that cross institutional boundaries and serve to disseminate experiences, methods, and understandings.


Dr. Robert Bohn on "High Quality Metric-Based SLAs for Cloud Computing" @NITRDgov FASTER CoP, 06/22/2017.

Abstract: The ecosystem of cloud computing is surrounded by a flurry of technical issues (availability, performance, data management) which in turn create challenges for the procurement of cloud services. Moreover, to embrace the cloud model, the ability to measure aspects of the service is crucial. A generic Service Level Agreement from one cloud provider cannot be directly compared to another's, because the vocabulary, definitions, and other terms often differ. The community needs a standard dictionary of terms, reusable components, and a series of metrics that can be validated. This talk will cover the work that NIST has performed in collaboration with ISO/IEC to create SLA standards and metrics. In addition, the NIST team is working to leverage these standards to create a USG SLA base model that agencies can use for their procurements.