
  • The data are stored, so it’s not a live-feed problem. It is an inordinate amount of data that’s stored, though. I don’t understand this well enough to explain it properly, so I’m going to quote from a book [1]. Apologies for the wall of text.

    “Serial femtosecond crystallography [(SFX)] experiments produce mountains of data that require [Free Electron Laser (FEL)] facilities to provide many petabytes of storage space and large compute clusters for timely processing of user data. The route to reach the summit of the data mountain requires peak finding, indexing, integration, refinement, and phasing.” […]

    "The main reason for [steep increase in data volumes] is simple statistics. Systematic rotation of a single crystal allows all the Bragg peaks, required for structure determination, to be swept through and recorded. Serial collection is a rather inefficient way of measuring all these Bragg peak intensities because each snapshot is from a randomly oriented crystal, and there are no systematic relationships between successive crystal orientations. […]

    Consider a game of picking a card from a deck of all 52 cards until all the cards in the deck have been seen. The rotation method could be considered as analogous to picking a card from the top of the deck, looking at it and then throwing it away before picking the next, i.e., sampling without replacement. In this analogy, the faces of the cards represent crystal orientations or Bragg reflections. Only 52 turns are required to see all the cards in this case. Serial collection is akin to randomly picking a card and then putting the card back in the deck before choosing the next card, i.e., sampling with replacement (Fig. 7.1 bottom). How many cards need to be drawn before all 52 have been seen? Intuitively, we can see that there is no guarantee that all cards will ever be observed. However, statistically speaking, the expected number of turns to complete the task, c, is given by c = n * (1 + 1/2 + 1/3 + … + 1/n), where n is the total number of cards. For large n, c converges to n*log(n). That is, for n = 52, it can reasonably be expected that all 52 cards will be observed only after about 236 turns!

    The problem is further exacerbated because a fraction of the images obtained in an SFX experiment will be blank because the X-ray pulse did not hit a crystal. This fraction varies depending on the sample preparation and delivery methods (see Chaps. 3–5), but is often higher than 60%. The random orientation of crystals and the random picking of this orientation on every measurement represent the primary reasons why SFX data volumes are inherently larger than rotation series data.

    The second reason why SFX data volumes are so high is the high variability of many experimental parameters. [There is some randomness in the X-ray pulses themselves]. There may also be a wide variability in the crystals: their size, shape, crystalline order, and even their crystal structure. In effect, each frame in an SFX experiment is from a completely separate experiment to the others."
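
    (An aside from me, not the book: that “about 236 turns” figure is easy to sanity-check, since the quoted formula is just the coupon-collector expectation c = n * (1 + 1/2 + … + 1/n). A minimal C++ sketch; the function and variable names here are just made up for illustration.)

    ```cpp
    #include <iostream>

    // Expected number of draws *with replacement* needed to see all n
    // distinct outcomes (the coupon-collector result quoted above):
    // c = n * (1 + 1/2 + 1/3 + ... + 1/n), which approaches n*log(n) for large n.
    double expected_draws(int n) {
        double harmonic_sum = 0.0;
        for (int k = 1; k <= n; ++k) {
            harmonic_sum += 1.0 / k;
        }
        return n * harmonic_sum;
    }

    int main() {
        const int n = 52;  // 52 "cards", standing in for crystal orientations
        std::cout << "Expected draws for n = " << n << ": "
                  << expected_draws(n) << "\n";  // prints roughly 236
    }
    ```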

    From the section “The Realities of Experimental Data”: "The aim of hit finding in SFX is to determine whether the snapshot contains Bragg spots or not. All the later processing stages are based on Bragg spots, and so frames which do not contain any of them are useless, at least as far as crystallographic data processing is concerned. Conceptually, hit finding seems trivial. However, in practice it can be challenging."

    “In an ideal case shown in Fig. 7.5a, the peaks are intense and there is no background noise. In this case, even a simple thresholding algorithm can locate the peaks. Unfortunately, real life is not so simple”
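
    (Again my aside, not the book’s: the “simple thresholding algorithm” in that ideal case is conceptually just “count the bright pixels and call the frame a hit if there are enough of them”. Here is a toy C++ sketch of that idea only; real peak finders, like the ones used with CrystFEL, deal with background, noise and detector artefacts and are far more sophisticated.)

    ```cpp
    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Toy "ideal case" hit finder: call a detector frame a hit if enough
    // pixels exceed a fixed intensity threshold. This is only the concept
    // described above, not how production peak finders actually work.
    bool is_hit(const std::vector<float>& frame,
                float threshold,
                std::size_t min_bright_pixels) {
        std::size_t bright = 0;
        for (float intensity : frame) {
            if (intensity > threshold) {
                ++bright;
            }
        }
        return bright >= min_bright_pixels;
    }

    int main() {
        // Made-up 4x4 "detector frame" with two bright peak-like pixels.
        std::vector<float> frame = {
            0.1f, 0.2f, 0.1f, 0.0f,
            0.3f, 9.5f, 0.2f, 0.1f,
            0.0f, 0.1f, 8.7f, 0.2f,
            0.1f, 0.0f, 0.2f, 0.1f,
        };
        std::cout << (is_hit(frame, 5.0f, 2) ? "hit" : "blank") << "\n";
    }
    ```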

    It’s very cool; I wish I knew more about this. A figure I found for the approximate data rate is 5 GB/s per instrument. I think that’s for the European XFEL.

    Citation: [1]: Yoon, C.H., White, T.A. (2018). Climbing the Data Mountain: Processing of SFX Data. In: Boutet, S., Fromme, P., Hunter, M. (eds) X-ray Free Electron Lasers. Springer, Cham. https://doi.org/10.1007/978-3-030-00551-1_7



  • He doesn’t directly control anything with C++; it’s just the data processing. The gist of X-ray crystallography is that we shoot X-rays at a crystallised protein, which scatters them by diffraction; we can then take the resulting diffraction pattern and do some mathemagic (sketched at the end of this comment) to figure out the electron density of the crystallised protein and, from there, work out the protein’s structure.

    C++ helps with the mathemagic part of that, especially because by “high throughput” I mean that the research facility has a particle accelerator that’s over 1 km long and cost multiple billions, because it can shoot super-bright X-ray pulses at a rate of up to 27,000 per second. It’s the kind of place that’s used by many research groups, and you have to apply for “beam time”. The sample is piped in front of the beam, and the result is thousands of diffraction patterns that need to be matched to particular crystals. That’s where the challenge comes in.

    I am probably explaining this badly because it’s pretty cutting edge stuff that’s adjacent to what I know, but I know some of the software used is called CrystFEL. My understanding is that learning C++ was necessary for extending or modifying existing software tools, and for troubleshooting anomalous results.
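
    (To unpack the “mathemagic” a bit, since I glossed over it above: the standard textbook relation, not anything specific to the code he works on, is that the electron density is a Fourier sum over the measured reflections.)

    ```latex
    % Electron density as a Fourier sum over the Bragg reflections (hkl).
    % V is the unit-cell volume; F(hkl) are the structure factors.
    \[
      \rho(x, y, z) = \frac{1}{V} \sum_{h,k,l} F(hkl)\, e^{-2\pi i (hx + ky + lz)},
      \qquad F(hkl) = |F(hkl)|\, e^{i\varphi(hkl)}
    \]
    % The detector only records intensities I(hkl) proportional to |F(hkl)|^2,
    % so the phases have to be recovered separately (the famous "phase problem").
    ```

    The “peak finding, indexing, integration, refinement, and phasing” steps quoted in my other comment are roughly the pipeline that takes you from raw detector frames to those structure factor amplitudes and phases.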





  • I’m still a relative noob with Linux, and I find stuff “breaks” more on Linux (‘breaks’ as in it does something I don’t want it to), and it can take me a while to fix those things because I’m still learning. It takes a while in part because I want to actually understand what’s going wrong (and how to fix it), rather than just applying whatever fix I find.

    With Windows, when it’s doing something I don’t want it to, the troubleshooting process is usually much more straightforward, because often it’s a problem I can’t solve at all. The stuff I can change is quicker to deal with because I have more experience with Windows, but overall the experience is much more frustrating because of all the stuff I have to tolerate. It makes it feel like my computer isn’t my own.


  • I’m getting real tired of invoking Cory Doctorow’s concept of “enshittification”, but if the shoe fits… ¯\_(ツ)_/¯

    Enshittification is actually a really useful lens to apply here, because late-stage enshittification involves the company fucking over its business users, and I’m increasingly seeing that with Amazon. I read a great example recently: apparently a small independent reusable-diaper business almost went under because it relied on Amazon for fulfillment and logistics. A customer received a used diaper, was (justifiably) horrified, and posted about it on social media. It seems that someone else had purchased a diaper, used it, and then returned it via Amazon, which sent it back out as new without checking it. Short of not using Amazon for order fulfillment at all, there’s nothing the business could’ve done to prevent this, so it sucks that their reputation suffered so much for Amazon’s fuck-up.

    Then there’s also the way Amazon used data from sellers on its platform to create its Amazon Basics range, and then outcompeted those same sellers using its platform advantage.

    I genuinely wonder how much longer it can go on. The only stage of enshittification that Amazon hasn’t reached yet is dying, and that feels long overdue. I haven’t checked, but I wouldn’t be surprised if Amazon Web Services is propping up the rest of the business.