We occasionally see questions about what JS-based sites can do and still be visible in search, so here's a brief summary of where things stand today:
# Don't cloak to Googlebot. Use "feature detection" & "progressive enhancement" techniques to make your content available to all users. Avoid redirecting to an "unsupported browser" page. Consider using a polyfill or other safe fallback where needed. The features Googlebot currently doesn't support include Service Workers, the Fetch API, Promises, and requestAnimationFrame.
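For the feature-detection point, here's a rough sketch of the pattern in plain JavaScript. The /polyfills.js URL and the #app element are just placeholders, not anything Google-specific:

```js
// Detect the APIs before using them; fall back or polyfill instead of
// redirecting unsupported clients to an "unsupported browser" page.
function loadScript(src, onLoad) {
  var s = document.createElement('script');
  s.src = src;
  s.onload = onLoad;
  document.head.appendChild(s);
}

function startApp() {
  // Serve the same content to every client, enhanced where supported.
  document.getElementById('app').textContent = 'Hello, world';
}

if (window.Promise && window.fetch) {
  startApp();
} else {
  // Placeholder bundle that polyfills Promise and fetch for older clients
  // (and for crawlers that don't support them yet).
  loadScript('/polyfills.js', startApp);
}
```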
# Use rel=canonical when serving the same content from multiple URLs is required.
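If a page is rendered entirely on the client, one way to express that preference is from script; a minimal sketch, where the product URL is only a placeholder (putting the tag straight into the served HTML is equally fine):

```js
// Point duplicate URLs at one preferred version of the page.
function setCanonical(href) {
  var link = document.querySelector('link[rel="canonical"]');
  if (!link) {
    link = document.createElement('link');
    link.rel = 'canonical';
    document.head.appendChild(link);
  }
  link.href = href;
}

setCanonical('https://example.com/products/blue-widget');
```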
# Avoid the AJAX-Crawling scheme on new sites. Consider migrating old sites that use this scheme soon. Remember to remove "meta fragment" tags when migrating. Don't use a "meta fragment" tag if the "escaped fragment" URL doesn't serve fully rendered content. 
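As a sanity check during such a migration, something along these lines can flag pages that still carry the tag (just a sketch; the selector matches the `<meta name="fragment" content="!">` tag the scheme uses):

```js
// Warn if a page still declares the AJAX-Crawling "meta fragment" tag,
// which tells crawlers to fetch the "?_escaped_fragment_=" version of the URL.
function hasFragmentMetaTag(doc) {
  var meta = doc.querySelector('meta[name="fragment"]');
  return !!(meta && meta.getAttribute('content') === '!');
}

if (hasFragmentMetaTag(document)) {
  console.warn('Remove the meta fragment tag - this page should no longer use the AJAX-Crawling scheme.');
}
```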
# Avoid using "#" in URLs (outside of "#!"). Googlebot rarely indexes URLs with "#" in them. Use "normal" URLs with path/filename/query-parameters instead, and consider using the History API for navigation.
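A minimal History API sketch, assuming a hypothetical render() function and a data-internal attribute for links the app handles itself:

```js
// Navigate with real paths instead of "#" fragments.
function render(path) {
  document.getElementById('app').textContent = 'Now showing: ' + path;
}

function navigate(path) {
  history.pushState({}, '', path); // update the address bar without a reload
  render(path);
}

// Handle the back/forward buttons.
window.addEventListener('popstate', function () {
  render(location.pathname);
});

// Intercept clicks on internal links and route them client-side;
// the href is still a normal, crawlable URL.
document.addEventListener('click', function (event) {
  var link = event.target.closest('a[data-internal]');
  if (link) {
    event.preventDefault();
    navigate(link.getAttribute('href'));
  }
});
```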
# Use Search Console's Fetch and Render tool to test how Googlebot sees your pages. Note that this tool doesn't support "#!" or "#" URLs.
I hope this was useful! Let me know if I missed anything, or if you need clarifications for any part.
 PWA: https://developers.google.com/web/progressive-web-apps
 Progressive enhancement: https://en.wikipedia.org/wiki/Progressive_enhancement
 rel=canonical: https://support.google.com/webmasters/answer/139066
 AJAX Crawling scheme: https://developers.google.com/webmasters/ajax-crawling/docs/specification