It is important to understand that all scales drift, if for no other reason than changes in temperature, but there are other factors that can cause a scale's indicated weight to drift in use.
As for the drift people see after powering up a scale, the cause is the change in temperature of the electronics, and initially it can be quite noticeable. That is why most, if not all, manufacturers recommend a warmup period. This drift occurs primarily in the electronics and is independent of whether the scale uses a load cell, force balance, balance beam, or any other technology. Once the temperature stabilizes the drift will subside. It does not typically go to zero, but varies with minor changes in temperature, which affect the very high gain electronics.

When a scale appears to no longer drift, that impression rests on the fact that drift only becomes visible once it exceeds the display resolution. On a 0.1 gn scale that's 0.1 gn; on a 0.02 gn scale that's 0.02 gn. Consider the FX-120i, which is often referred to as a 0.02 gn accuracy scale: drift can add up to a 0.019 gn bias error that the operator cannot see. The 95% repeatability of this scale (roughly 2 standard deviations, based on its 0.001 g SD spec) is about 0.03 gn. Adding the 0.02 gn resolution gives 0.05 gn repeatability. If the scale is not re-zeroed prior to every measurement, the hidden drift can push the error to 0.069 gn. Essentially, undetected, uncorrected scale drift can double the error associated with the scale resolution: in this case 0.02 gn becomes 0.04 gn.
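The arithmetic behind those numbers can be sketched in a few lines. This is a minimal illustration of the error budget described above, assuming the 95% figure is taken as 2 standard deviations on the 0.001 g SD spec:

```python
# Error-budget sketch for the FX-120i numbers quoted above.
# Assumption: 95% repeatability is approximated as 2 standard deviations.

GRAINS_PER_GRAM = 15.4324

sd_g = 0.001              # manufacturer's repeatability spec (SD), grams
resolution_gn = 0.02      # display resolution, grains
hidden_drift_gn = 0.019   # worst-case drift hidden below one display digit

sd_gn = sd_g * GRAINS_PER_GRAM                       # ~0.015 gn
rep95_gn = 2 * sd_gn                                 # ~0.03 gn
budget_gn = rep95_gn + resolution_gn                 # ~0.05 gn
budget_with_drift_gn = budget_gn + hidden_drift_gn   # just under 0.07 gn

print(rep95_gn, budget_gn, budget_with_drift_gn)
```

Rounding at each step reproduces the 0.03 gn, 0.05 gn, and ~0.069 gn figures in the post.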
It is good laboratory practice to re-zero the scale before each weighing. In fact, the following is from Section 3.2 of the A&D manual:
"Press the RE-ZERO key before each weighing to eliminate possible errors."
There are other errors that re-zeroing eliminates. One of the most common is the buildup of skin oil and dirt that accumulates on the weighing pan when it is handled with bare hands. Some scales will also drift under repeated use due to hysteresis. Re-zeroing a scale does not affect its calibration.
Not surprisingly, higher cost scales/balances usually drift less than lower cost ones. This is generally due to the quality of the components used, and in some cases to different circuitry designed to minimize drift.
Yep, all true. My fx120i moves around the most during the first 30-45 minutes or so of on time while it's warming up. I won't use it or check calibration until it's had at least 30 minutes to warm up and stabilize. A stable environment and clean power help keep things in check after the warmup period.
Stronger zero tracking helps with drift: it automatically re-zeros the scale for you when the pan is stable within a couple of digits of zero, provided you aren't trying to weigh a very small sample near the minimum resolution of the balance, where it would interpret the tiny sample weight as zero drift. But as you mentioned, frequent manual re-zeroing is best practice, and that goes for any balance. With an automatic reloader you could build logic into the code to send a manual re-zero command to the balance every time the powder cup is placed on the bed while the balance reads within, say, +/- 0.003 grams, indicating the cup is empty. This would slightly slow the dispensing process, since you would have to wait for a stable zero reading, then for the re-zero command to be sent and the scale to re-zero, but it would help maintain a good zero without bias drift below the display resolution of the balance.
If you wanted things to be faster, the code could send the manual re-zero command every 5 or 10 charges instead, assuming the drift and bias error won't grow to unacceptable levels over the time it takes to dispense and weigh 5-10 charges.
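The decision logic for both strategies above can be sketched as a small policy function. This is only an illustration: the balance I/O is abstracted away, the 0.003 g tolerance is the thread's number, and `should_rezero` is a hypothetical helper, not part of any balance's API:

```python
# Sketch of the auto re-zero policy for an automatic reloader.
# Assumed thresholds: the thread's +/- 0.003 g empty-cup band and a
# 10-charge fallback cadence for the faster variant.

EMPTY_CUP_TOLERANCE_G = 0.003   # reading within this band means the cup is empty
REZERO_EVERY_N = 10             # cadence for the faster, every-N-charges policy

def should_rezero(reading_g, is_stable, charges_since_rezero, every_charge=True):
    """Return True when a re-zero command should be sent to the balance."""
    # Only ever re-zero on a stable, empty-cup reading.
    if not is_stable or abs(reading_g) > EMPTY_CUP_TOLERANCE_G:
        return False
    # Per-charge policy re-zeros at every opportunity; the faster policy
    # waits until N charges have gone by since the last re-zero.
    return every_charge or charges_since_rezero >= REZERO_EVERY_N

# Stable near-zero reading with the empty cup on the pan -> re-zero now.
print(should_rezero(0.002, True, 3))
# Unstable reading -> wait before deciding.
print(should_rezero(0.002, False, 3))
```

On a real setup the caller would poll the balance for a stable reading, then transmit whatever re-zero command your balance's communication manual specifies.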
The biggest benefit of a higher resolution Sartorius over the more common fx120i, IMO, isn't the better resolution allowing you to weigh down to less than a single kernel of powder, but the tighter standard deviation and repeatability it offers. The fx120i has an advertised repeatability of 0.001 g, while the higher end Sartorius is an order of magnitude better at 0.0001 g. Thus you can be more confident of the powder charge being weighed exactly to 1 kernel on the Sartorius with typical stick powders, while on the fx120i you may be within a +/- 1 kernel span. Can you see that difference in charge weight certainty on your chronograph or target? You would have to do a lot of very controlled testing to determine that, and with variations in brass, bullets, primers, barrel wear and cleanliness, chamber and barrel temp, powder moisture content, and environmental conditions, IMO it would be hard to conclusively prove that an exact-to-the-kernel charge is more accurate on target than a +/- 1 kernel charge, since you can't control all the other variables in the test with 100% confidence. Still, it is another variation in the process you can minimize, and a Sartorius or similar will give you to-the-kernel confidence in your charge weight. Is it worth the extra money over an fx120i that provides +/- 1 kernel confidence in your charge weights with typical stick powders? That's up to the user.
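The "+/- 1 kernel" framing follows from simple unit conversion. A rough sketch, assuming a typical stick-powder kernel runs on the order of 0.02 gn (the kernel mass is my assumption, not a spec):

```python
# Comparing advertised repeatability (SD) to an assumed kernel mass.
# KERNEL_GN is an assumption: typical stick-powder kernels are roughly 0.02 gn.

GRAINS_PER_GRAM = 15.4324
KERNEL_GN = 0.02  # assumed mass of one stick-powder kernel, grains

fx120i_sd_gn = 0.001 * GRAINS_PER_GRAM      # ~0.015 gn
sartorius_sd_gn = 0.0001 * GRAINS_PER_GRAM  # ~0.0015 gn

print(fx120i_sd_gn / KERNEL_GN)     # on the order of one kernel
print(sartorius_sd_gn / KERNEL_GN)  # well under a tenth of a kernel
```

One SD on the fx120i is already close to a full kernel, so a 2-SD span comfortably covers +/- 1 kernel, while the Sartorius SD is a small fraction of a kernel.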
Even if you spend $1000+ on a sartorius you'll still search for something to blame when you throw a flyer on the target though, lol.