
Top Resources I Follow to Stay Current in Robotics


⚡ Top Resources Every Robotics Engineer Should Follow in 2026: the ones I rely on to stay current in robotics

  1. arXiv – Robotics Category
    arXiv is a free, open-access repository where researchers publish cutting-edge robotics and AI papers before they appear anywhere else. This is the fastest way to see what’s new in manipulation, SLAM, motion planning, robot learning, RL for control, and more.
    Daily updates. Zero fluff. (A minimal script for polling the newest cs.RO submissions appears right after this list.)

  2. IEEE Spectrum – Robotics
    Deep engineering-focused articles. Great for understanding how research translates into real products.

  3. Awesome Robotics (GitHub)
    A curated list of robotics libraries, frameworks, datasets, simulators, and tooling.
    If you're building something new, it probably links to what you need.

  4. NVIDIA Technical Blog – Robotics Section
    Great breakdowns on GPU-accelerated robotics, Isaac Sim, cuRobo, and state-of-the-art perception/AI pipelines.

  5. TechCrunch Robotics
    Great for startup news, new funding rounds, and market trends.
    Useful if you're tracking where robotics capital is flowing.

  6. Boston Dynamics Blog
    In-depth looks at their engineering decisions, control systems, and behind-the-scenes R&D.

  7. The Robot Report
    Covers industry news, company developments, product launches, and funding activity.
    Great for anyone tracking the business side of robotics.
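
If you'd rather have that arXiv feed come to you, the category is easy to poll programmatically. Below is a minimal sketch (not an official client) that pulls the newest cs.RO submissions from arXiv's public Atom API using only the Python standard library; the endpoint and query parameters are the standard export.arxiv.org ones, and max_results=10 is an arbitrary choice.

```python
# Minimal sketch: list the newest robotics (cs.RO) submissions from arXiv's
# public Atom API (export.arxiv.org). Standard library only.
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom XML namespace used by the feed
URL = (
    "http://export.arxiv.org/api/query"
    "?search_query=cat:cs.RO"                     # robotics category
    "&sortBy=submittedDate&sortOrder=descending"  # newest first
    "&start=0&max_results=10"                     # top 10; tune to taste
)

with urllib.request.urlopen(URL) as resp:
    feed = ET.fromstring(resp.read())

for entry in feed.findall(f"{ATOM}entry"):
    title = " ".join(entry.find(f"{ATOM}title").text.split())
    link = entry.find(f"{ATOM}id").text.strip()
    print(f"- {title}\n  {link}")
```

Drop it into a cron job (or a GitHub Action) and you have a zero-cost daily robotics digest.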


Udacity offers a fully online Master's in AI. Check it out here.

Get 40% off with code "SINA40"


1- This Week's Robotics Engineering Jobs:


2- This Week's LinkedIn Posts:

The Tech VP’s Secret: Job Hopper or Company Lifer?
We analyzed the career paths of 99 Tech VPs to find out exactly how many times they jumped companies on their way to the top, and the answer is not so surprising.

This is the 4th post in our series analyzing the career paths of top tech VPs. Previous posts in the series:
- Tech VP Educations: https://lnkd.in/eDG-VYn7
- Apple’s SVP of ML and AI Strategy: https://lnkd.in/eSSrGu2T

It turns out the route to the executive suite is less about hopping every 2 years and more about strategic tenure.

The Key Career Jumps Data:
- The median number of career jumps (company/role changes) is 3.
- The most common (mode) number of jumps is 2.

The data shows that the majority of top VPs built deep, strategic expertise across a small number of key employers. The single largest group of VPs made just 2 major jumps!

For those of you who’ve asked about a structured AI master’s program, Udacity’s fully online Master’s in AI is one worth checking out: https://lnkd.in/eTMFKN72 [See Comment]

Follow along: more insights are coming in this series. At the end of the series, we’ll release the full dataset. Comment if you’d like early access.

#ExecutivePath #VP
ByteDance’s Depth Anything 3: 3D Understanding Model Surpasses SOTA
Great news for the #Robotics community! ByteDance recently dropped Depth Anything 3 on Hugging Face, and it’s arguably the most capable 3D understanding model we’ve seen. It solves the geometry puzzle (depth + rays) from an arbitrary number of inputs. No known camera poses? No problem. It handles the reconstruction automatically.

Before getting into how it works, look at what it achieved against the previous SOTA (state of the art), Meta’s VGGT (Visual Geometry Grounded Transformer):
- 35% better camera pose accuracy
- 23% better geometric accuracy
- New SOTA on ScanNet++, ETH3D, and 7Scenes

Here is the breakdown in plain English.

The Old Way (The Problem)
Previously, if you wanted a robot to understand the 3D world, you had to use many different, specialized programs:
- Monocular models for guessing depth.
- Multi-view models for triangulation.
- SLAM systems for tracking movement.
These systems rarely talked to each other well. You ended up with complex, brittle pipelines.

The New Way (Depth Anything 3)
The researchers asked: can we just use one standard AI brain to do it all? The answer appears to be yes.

What it does: it takes any visual input (a single photo, a stereo pair, or a full video stream) and recovers the full 3D visual space.

Why it’s different:
- It’s simple: instead of custom engineering, they used a standard Vision Transformer (DINOv2).
- One goal: it predicts “depth” (how far) and “rays” (what direction) for every pixel. That’s it.
- Any input: it adapts automatically, whether you give it 1 image or 100 images.

Why this matters for robotics, humanoids, AVs, drones, AR/VR, SLAM, and mapping:
- Simplicity = speed: we can stop building fragile, multi-stage pipelines. One model handles the heavy lifting.
- Robustness: because it works on “any view,” it’s much more stable when your robot moves or when cameras get occluded.
- Foundation model: we are moving away from “specialized tools” toward “foundation models” for 3D geometry.

Paper: https://lnkd.in/eBJ9-tjs
Project webpage: https://lnkd.in/e_Ez6gYJ
Models: https://lnkd.in/epw6nJRh
Demos: https://lnkd.in/esVPs9DC
Code: https://lnkd.in/ebD8bHbT

Check out Udacity’s Online Master’s in AI [see comment]: https://lnkd.in/eTMFKN72
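
If you want a hands-on feel for the single-image depth piece described above, here is a minimal sketch. Depth Anything 3’s full pose-free, multi-view pipeline ships through the project’s own code and models (linked in the post), so this sketch instead uses the Hugging Face depth-estimation pipeline with the earlier Depth Anything V2 checkpoint; the model ID is an assumption for illustration, so swap in whatever Depth Anything 3 checkpoints you pull from the links above.

```python
# Minimal single-image depth sketch via the Hugging Face "depth-estimation"
# pipeline. The checkpoint below is the earlier Depth Anything V2 small model
# (assumed model ID, for illustration only); Depth Anything 3's multi-view
# depth + ray prediction lives in the project's own code linked in the post.
from transformers import pipeline
from PIL import Image

depth = pipeline("depth-estimation", model="depth-anything/Depth-Anything-V2-Small-hf")

image = Image.open("scene.jpg")          # any RGB photo from your robot or phone
result = depth(image)                    # {"predicted_depth": tensor, "depth": PIL image}

result["depth"].save("scene_depth.png")  # per-pixel relative depth map
```

The full pose-free, multi-view reconstruction is what the linked code and demos showcase.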
John Giannandrea’s AI Leadership Journey to Apple SVP
John Giannandrea’s journey to becoming Apple’s SVP of Machine Learning and AI Strategy is a masterclass in compounding experience, placing long-term bets on frontier tech, and staying close to the hardest problems in computing.

This is Post #3 in my ongoing series breaking down the real career paths of tech VPs; see Post #2 on VP education here: https://lnkd.in/eDG-VYn7

For anyone aiming at VP-level roles, his path shows that depth in one domain, paired with repeatedly scaling your impact, can be far more powerful than chasing titles.

Who John Giannandrea Is
He’s a Scottish software engineer who studied Computer Science at the University of Strathclyde.

Early Foundation Years
Before entering executive roles, John spent years as a hands-on engineer at pioneering companies like General Magic, working on early mobile and communication technologies. These experiences grounded him in systems, networks, and real-world product constraints long before “AI leadership” was a defined career path.

Entrepreneurial Bets
He co-founded Tellme Networks, focused on voice-driven services, and later Metaweb Technologies, which built a knowledge graph of entities and relationships. Both startups sat at the intersection of data, language, and user behavior, the exact foundation modern AI systems rely on today.

Scaling Impact at Google
After Google acquired Metaweb, John spent about eight years at the company, eventually leading Machine Intelligence, Research, and Search. He unified advanced AI research with core products (Search, Google Assistant, and others), turning machine learning into a company-wide capability rather than an isolated function.

Step Into Apple’s Executive Team
In 2018, Apple recruited him to lead Siri and the company’s broader AI strategy. Shortly after, he joined the executive team.

Takeaways for Aspiring VPs
- Start as a builder: his career began with deep technical work, not management roles.
- Bet on waves early: voice, knowledge graphs, and search became central to AI years after he invested in them.
- Show you can scale: from co-founding startups to leading Google’s largest technical teams, he proved execution at every level.
- Own mission-critical problems: at Apple, AI isn’t peripheral; it’s central to the product roadmap, and that’s where he chose to lead.

Check out Udacity’s Online Master’s in AI [see comment]: https://lnkd.in/eTMFKN72

Paid Member Benefits:

List of Benefits for Paid Members

Access your benefit if you are a paid member


Share the questions you were asked during your interview to help us build a database of robotics interview questions+

+Contributors get free access to the final database.

Interview Questions - Robotics
We’re collecting technical, behavioral, and case-based questions to create a resource for job seekers in the robotics field. While this resource may be monetized in the future, contributors will receive full access to the database for free as a thank-you for their support if an email is provided in this form (optional). Your input will remain anonymous unless you choose otherwise. Thank you for helping build this valuable resource! Note: Please do not share any interview questions or content that is explicitly marked as confidential or proprietary, or that you believe might violate any non-disclosure agreements or company policies. By submitting this form, you confirm that the information provided is not subject to any such restrictions.

Share your total compensation details to help build a transparent resource for robotics professionals*+

Total Compensation - Robotics
Share your total compensation details (base salary, bonuses, RSUs) to help build a transparent resource for robotics professionals. Submissions are anonymous and will help others understand industry standards. By submitting, you confirm that your information doesn’t violate any confidentiality agreements or policies. Contributors get free access to the final database—thank you for your support! (You can email me separately after submitting this form: robotixwithsina@gmail.com)

*Submissions can be anonymous. By submitting, you confirm that your information doesn’t violate any confidentiality agreements or policies. 

+Contributors get free access to the final database.

⚠️ This newsletter issue is sponsored by Udacity. This page includes an affiliate link that supports my work at no extra cost to you, but you’re never obligated to use it.