Google Cloud Managed Lustre, a managed high-performance parallel file system service, has been announced by Google and DDN.
https://www.admin-magazine.com/News/Google-Cloud-Managed-Lustre-Now-Generally-Available?utm_source=mam
#HPC #EXAscaler #GoogleCloud #data #simulation #research #DDN
Announcing Python-Blosc2 3.6.1
Unlock new levels of data manipulation with Blosc2!
We've introduced a major improvement: powerful fancy indexing and orthogonal indexing for Blosc2 arrays.
We've tamed the complexity of fancy indexing to make it intuitive, efficient, and consistent with NumPy's behavior.
Read all about it on our blog! https://www.blosc.org/posts/blosc2-fancy-indexing/
Compress Better, Compute Bigger!
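For a quick feel of what the new indexing looks like, here is a minimal sketch (mine, not from the announcement) of NumPy-style fancy and boolean-mask indexing on a compressed Blosc2 array. It assumes the NumPy-consistent semantics described in the post and uses blosc2.asarray to build the array; see the linked blog entry for the authoritative examples.

```python
# Minimal sketch: fancy indexing on a compressed Blosc2 array,
# assuming the NumPy-consistent behavior described in the post.
import numpy as np
import blosc2

a_np = np.arange(20).reshape(4, 5)
a = blosc2.asarray(a_np)       # compressed Blosc2 NDArray

# Fancy indexing with an integer list: pick rows 0 and 2.
print(a[[0, 2]])

# Boolean-mask indexing along the first axis.
print(a[a_np[:, 0] > 5])
```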
#HPC Leadership scores so far
Q1: 97% of people got it correct
Q2: 76%
Q3: 73%
Q4: 84%
Q5: 89%
Q6: 88%
Q7: 65%
Q8: 78%
Q9: 90%
Q10: 70%
The most commonly selected wrong answer (chosen by 33% of people) is on Q7.
Nearly 20% of people got 10/10.
Still nobody who said they don't need the course got 10/10.
They master a wide range of AI challenges, meet today's high-performance computing requirements, and are very much in demand right now: our GPU servers. Whether it's a large language model (LLM), machine learning, rendering, or computer-aided design (CAD), GPU servers can handle whatever AI demands of them.
If you want to experience the power of GPU servers, you can find all the information you need here on our website: https://nine.ch/products/gpu-server/
#gpu #server #ai #hpc #managed #hypervisor #nine
I am a sucker for photos of cool #HPC infrastructure, and here is a dense GB200 NVL72 cluster going up somewhere in Canada (I think). Impressive to see this many racks in a row; the DC must have facility water, which is still uncommon in hyperscale. Source: https://www.linkedin.com/posts/5cai_heres-a-peek-behind-the-curtain-at-the-early-activity-7350949703842189313-X3p_?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAU98N0BKzpkHNnW4i2sDGnIDTwgK7pQHXc
Our order just got submitted for a new 8x H200 GPU node for our HPC cluster and internal LLM platform. Each of the 8 H200 GPUs has 141 GB of memory, which is going to let us run much bigger models, so I'm psyched. Not going to lie, getting first access to all the expensive toys is one of my favorite perks of the job!
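As a rough back-of-the-envelope illustration (my own numbers, not from the post) of why 8 x 141 GB matters for model size, here is a small Python sketch estimating how many parameters fit in the node's aggregate memory at different weight precisions, with an assumed overhead reserve:

```python
# Back-of-the-envelope: aggregate memory on an 8x H200 node and the
# largest weights-only model that fits at a given precision.
# Illustrative assumptions, not vendor figures.
HBM_PER_GPU_GB = 141
NUM_GPUS = 8
total_gb = HBM_PER_GPU_GB * NUM_GPUS            # ~1128 GB aggregate

for label, bytes_per_param in [("fp16/bf16", 2), ("fp8/int8", 1)]:
    # Reserve ~20% for KV cache, activations, and runtime overhead (assumed).
    usable_gb = total_gb * 0.8
    # GB divided by bytes-per-parameter gives billions of parameters.
    max_params_billion = usable_gb / bytes_per_param
    print(f"{label}: roughly {max_params_billion:.0f}B parameters "
          f"fit in {usable_gb:.0f} GB of usable memory")
```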
DataCenter-Insider looks at the LRZ's planned upcoming #Supercomputer and describes the NVIDIA technology behind it: https://www.datacenter-insider.de/lrz-plant-supercomputer-blue-lion-mit-nvidia-vera-rubin-a-da7e952a8d85ca64297ec6e76d0f440e/