New Athlon Rangecraft Chronograph - Garmin Xero Killer?

That is the correct app. In the “hamburger” menu at top right, you can select “connect device” and go through a couple of steps (pairing with the device via Bluetooth in your phone's Settings app, then within the Athlon Ballistics app) to connect the device to your phone. Once connected, again in the “hamburger” menu, click “manage devices,” which will open the chronograph data logger and firmware update interface. Typically the data will sync over automatically; if not, simply click the “sync data” button. You can update firmware in this interface as well, if a new firmware update is available (likely, if you have not yet updated after purchase).
Thank you, sir :)
 
The communicated "precision" for the Garmins, as well as the LabRadar LX's and the Athlons, is supposed to be +/-0.1% for rifle cartridges (I'm not terribly certain why the precision specification would quadruple for slower projectiles, other than a potential for lower integrity frequency shift - but in theory, the units should be able to hit the projectile more times and do better interpolating velocities). "within 5fps on 2999" would be wider than +/-0.1% by almost double.

But... Given hundreds of rounds in testing with 6 or more chronographs operating side by side, comparing the disparity between the two units of each brand and between the brands against one another, after ALL of those rounds fired and compared, Garmin is the only brand which could POSSIBLY be within their published specification for accuracy.

How I'm comfortable stating this - example: say I have a true velocity of 2807.0fps. One unit, landing within +/-0.1% of that (+/-2.8fps), could read 2809.8 as the absolute highest reading which could still be within the specified accuracy, while another unit could read 2804.2fps as the absolute slowest reading which could still be within that specified accuracy. In that case, if I believe both are telling the truth, then I could know 2807.0 is the true velocity, because no other velocity could be shared by the two while both still achieve their published specifications. More realistically: if the two units EVER display readings more than 2x 2.8fps apart - a spread larger than 5.6fps between the two units - then at least ONE unit MUST be reporting outside of the published specification for accuracy. In multiple 30, 50, and 100rnd experiments, the Garmins are the only units which have consistently reported velocities close enough together for both units to be within the specified accuracy. The LabRadar LX's have been loosely just outside of their published accuracy on average, with the max spread doubling the acceptable band, and the Athlons have been MUCH farther outside of their published specification for accuracy - with the average difference between the two units often being wider than the specification would allow.

So this is a non-definitive method - ALL of them could still be wrong relative to true velocity - however, in my testing so far, with hundreds of rounds fired, the Garmins have been the only units which have the potential to actually be right. The Garmins MIGHT be reliably within their published spec for accuracy to truth, but by default, I can prove at least one, if not both, of the other brand units are NOT close enough together for them to both be within spec of truth.
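The spread check described above can be sketched in a few lines. This is just an illustration of the arithmetic (the function name and the second example's readings are mine, not from the thread): two readings can both be within +/-0.1% of some shared true velocity only if they sit no more than 0.2% of that velocity apart.

```python
# Sketch of the spread check described above - an illustration of the
# arithmetic only; function name and example values are mine, not vendor spec.
SPEC = 0.001  # +/-0.1% published accuracy for rifle velocities

def could_both_be_in_spec(v1_fps, v2_fps, spec=SPEC):
    """True if some shared true velocity exists for which both readings
    are within spec; False means at least ONE unit is out of spec."""
    # Both within spec of a truth T implies |v1 - v2| <= spec * (v1 + v2),
    # i.e. no more than 2x spec around the midpoint of the two readings.
    midpoint = (v1_fps + v2_fps) / 2
    return abs(v1_fps - v2_fps) <= 2 * spec * midpoint

# The post's example: truth 2807.0 fps, readings at the two band edges.
print(could_both_be_in_spec(2809.8, 2804.2))  # True  (spread is exactly 0.2%)
print(could_both_be_in_spec(2815.0, 2800.0))  # False (at least one is out of spec)
```

Note the check only proves failure, never success, which is exactly the "non-definitive method" caveat above: both units could share the same wrong velocity and still pass.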
Add to that the fact that the radars are probably interfering with each other to some degree when more than one radar is measuring, so creating a precise direct side-by-side for the average user/YouTuber seems out of reach
 
I think that it’s been shown ( Varminterror? ) that interference from units in close proximity is or can be a reality.
 

What turkeytider said.

Plus, Dustin of Athlon has stated on video that he observed Rangecraft chronos interfering with each other during their own testing.

Difference seems to be Garmin put in extra effort to defeat this interference and Athlon said "probably won't happen much at a real range, it's good enough".

I own the Athlon and it serves my needs adequately for ~$200 less than the typical Xero price. Mostly hunting/subsonic applications.

If I was using it for competition purposes, especially where more radars might be used simultaneously, I'd probably spring the extra money for the Garmin.
 
Tested the Athlon side by side with the original LabRadar. Didn't miss a shot, but did average exactly 15 fps faster over 20 shots.

This has been a relatively common, recurring experience for me, after several sessions repeating side by side comparisons for hundreds of rounds.

Add to that the fact that the radars are probably interfering with each other to some degree when more than one radar is measuring, so creating a precise direct side-by-side for the average user/YouTuber seems out of reach

“Kind of interfering” doesn’t happen in this partial fashion - at least for adjacent radars. Either we have co-channel interference or we don’t, meaning the interfering units are emitting on the same channel such that each unit can’t tell which signal is its own and which belongs to the opposing unit. This is co-channel interference: the unit gets confused by a premature or delayed echo within its echo period which did not originate from that respective unit. So for the Athlon, we can see continuous false triggers, even false readings for shots which did not happen, and incorrect velocities when shots ARE recorded. But these - not coincidentally - are not “wrong” by only 10-20fps; they’re typically wrong by hundreds or thousands of feet per second. The radar-triggered Garmin and Athlon units give away co-channel interference very reliably, as they will offer false triggers, indicating to the shooter that an interfering radar is reflecting from the field. LabRadar and VelociRadar hide interference behind their recoil and acoustic triggers, so it’s a little harder to detect.
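Because interference errors land hundreds or thousands of fps away from truth rather than tens, they are easy to separate from honest readings in a logged string. The following is purely my own sketch (no vendor firmware is claimed to work this way; the function name and the 150 fps window are invented for illustration):

```python
# Hypothetical post-hoc filter for the failure mode described above:
# co-channel hits are wrong by hundreds/thousands of fps, so a crude
# plausibility window around the string median catches them. The window
# size (150 fps) is an arbitrary illustrative choice, not a vendor value.
from statistics import median

def flag_interference(readings_fps, window_fps=150.0):
    """Split a shot string into plausible readings and suspected
    interference hits, by deviation from the string median."""
    mid = median(readings_fps)
    plausible = [v for v in readings_fps if abs(v - mid) <= window_fps]
    suspect = [v for v in readings_fps if abs(v - mid) > window_fps]
    return plausible, suspect

good, bad = flag_interference([2805, 2811, 2798, 4125, 2807, 1460])
print(good)  # [2805, 2811, 2798, 2807]
print(bad)   # [4125, 1460]
```

The catch, as noted above, is that a filter like this cannot rescue a reading that is wrong by only 10-20 fps; those sit well inside any sane plausibility window.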

I can also state directly: in testing large sample strings to prove out statistically confirmable averages and SD/ES’s, when I have shot only ONE chronograph at a time during these tests, ONLY the Athlon units have persisted in the 10-20fps high readings, even when they were the only radar chronograph operating within miles. It is intermittent, and doesn’t seem to have any rhyme or reason as to which days it will decide to run higher readings vs. correct readings.

I think that it’s been shown ( Varminterror? ) that interference from units in close proximity is or can be a reality.

I’ve tested this relatively extensively, including an initialization protocol which I used for my side-by-side testing to determine which of my single-channel or assignable-channel units have to be either reassigned or not shot together.
 
Which of the several systems is the “standard” against which determinations can be made as to which is TRULY accurate?

There isn't a verified one, and if anyone reads through the entire thread, the evidence doesn't stack up phenomenally in favor of shot-to-shot consistency for the Athlon.

In theory, Garmin tests their units against a "calibrated standard" mentioned in their documentation, but nowhere is that standard identified.
 
Yeah? Where is the proof that Athlon doesn't calibrate?
 

DuH Hur hur HuR ... wow dude ... you totally got me ... you must be the smartest person in the thread.

I never said they (Rangecraft) didn't. I haven't seen any claim they (Rangecraft) do it in their documentation.

I didn't even offer any PrOOf Garmin actually tests against a calibrated device. I said they (Garmin) mention it in their documentation.

"Proof" ... :ROFLMAO: :ROFLMAO: :ROFLMAO: :ROFLMAO:

I own one, a Rangecraft Athlon in case I need to spell it out. It will be good enough for my needs. One day I may compare it to my old Beta Chrony, but for now Varminterror's data is good enough for me.
 
Which of the several systems is the “standard” against which determinations can be made as to which is TRULY accurate?
There have been several reports of the Athlon consistently giving a higher FPS when tested against other chronys, including MagnetoSpeeds, LabRadars, and Xeros. I guess the other three can be wrong, but I am going to stick with the probable odds that the Athlon is the unit giving inaccurate readings. I have heard the Athlon units are getting better with the newer firmware updates.
That's a pretty biased comment. There are no accuracy issues. Sounds like you may have some sort of vested bias.
I have no dog in the fight just stating what has been reported.
 
There have been several reports of the Athlon consistently giving a higher FPS when tested against other chronys, including MagnetoSpeeds, LabRadars, and Xeros. I guess the other three can be wrong, but I am going to stick with the probable odds that the Athlon is the unit giving inaccurate readings. I have heard the Athlon units are getting better with the newer firmware updates.

I have no dog in the fight just stating what has been reported.

I think that is solid but flawed thinking. One could argue that the last one to the table has the most updated software and learned lessons from the others. I am not saying that is what happened.

I trust Labradar and Magnetospeed about as much as I do letting someone drop hammer my hand from 2’. I really have little respect for either company. As for Garmin, I use Garmin's 2025 Instinct3 watch for sleep detection and it is ludicrously poor compared to my much cheaper Fitbit. It is really unbelievable how bad it is. Garmin should hang their head in shame reusing that outdated tech.

We need Brian Litz to settle this with his advanced radar systems.
 


I’ll say it again. I should have ditched my LR long ago and got one of these mini radars! Everyone should have one in their kit.
 
As for a metrology standard (bench standard, if you will), it wouldn't surprise me if Garmin, Litz, etc. use Infinition's very high-end Doppler radars.

For those that don't know, Infinition owns LabRadar, and their large (not home use) radars are their primary business... it always seemed to me that the shooter market (LabRadar) was an afterthought and that this was reflected in their awful support and updates of that product.



 
I guess the Athlon unit may be good for consistency, but not accuracy of the velocity.
That's a pretty biased comment. There are no accuracy issues. Sounds like you may have some sort of vested bias.

I can state directly, and have shared the datasets publicly, that both consistency and accuracy compared to the other brands are a problem for the 3 Athlon units I have used, 2 of which have been tested for thousands of rounds side by side with multiple units of the other brands. About 30% of the time, the Athlons demonstrate a high or low offset of teens to 20's of fps against the other units and against pre-established averages for the load/batch (strangely tracking together in each instance), while the LabRadar LX's and Garmin C1's continue tracking together tightly. Equally, the Athlon data sets show higher volatility within the multi-shot strings than the other brands in almost every instance of comparison, even when the offset isn't occurring.

I've also noted, as I described above in this thread, that when we look at whether the units of each brand could possibly be reading within their +/-0.1% specification, ONLY the Garmins have achieved this reliably, with the LabRadar LX's slightly outside of that possibility, and the Athlons always spreading apart from one another by more than 2x the specified precision. This isn't a perfect system of direct measurement, but it's relatively simple to acknowledge that if each unit must be within 0.1% of truth, then 2 units can never be more than 0.2% apart - and as I stated, ONLY the Garmin units have achieved that expectation (3 different Garmin units included in the testing, deployed in pairs). The LabRadar LX's have been close on most side by side tests, but slightly outside of that standard (occasionally much higher), meaning one or both MUST be greater than 0.1% "wrong" in their reading since the two are more than 0.2% apart. The Athlons have frequently exhibited max spreads between the units 2-4x larger than the spec would allow - even the AVERAGE spread between the 2 units has been greater than 2x the specified precision for some tests.

I have no "vested bias," and I've shared every data point within these datasets publicly as they were compiled, so there's really no ability to point fingers and cast aspersions against these results. My data sets I've shared have hundreds of data points which reflect both accuracy and consistency issues.
 
How bout the system that matches what's happening down range?
I was going to mention this also.

Whichever unit is agreeing with (pick your ballistic app) would be another indicator of which unit is more accurate.

If post #323 doesn't convince you the Garmin is worth twice the price well you are in denial IMO....LOL

"You don't always get what you pay for, but you always pay for what you get"
 


So it sounds like from your testing we should consider the Garmin the gold standard. Well damn done…..👍



 
V., I know that you’ve shared these results with Athlon. Would be useful to know how they responded, if indeed they have. Looks like everyone who purchased an Athlon chronograph got screwed.
 
Upon reflection concerning Varminterror’s excellent work, I find myself wondering if the degree of apparent lack of accuracy and repeatability with the Athlon is more or less problematic depending upon the type of shooter and their data requirements. Are the requirements the same for a recreational, non-handloading target shooter as they are for a competition-shooting, advanced handloader? On the one hand, you might have someone trying to determine which factory load is the “least bad” when it comes to consistency, hopefully able to get an average MV that at least is more accurate than the number on the box. On the other hand, you’ve got someone who concerns themselves with each and every grain of powder and seating depths varying in extremely small amounts, with corresponding impacts on performance. Just a question.
 
@Turkeytider - Addressing your 2 posts here based on the 2 posed or implied questions:

1) How has Athlon responded to these results?
2) Can the Athlon work for some shooters as it is?

I'll try to be as succinct as possible here:

1) How has Athlon responded to my results? First off, reiterating, I'm not affiliated with any of these companies. I bought my first Garmin and Athlon units, and I won my LabRadar LX at Ko2M - but would have bought one if I hadn't won it. My "second unit" of each of these brands has been loaned to me either by the companies or their reps, just to help me avoid long-term borrowing units from other shooters and enable improved data capture opportunities (such as the Static vs. Gun Mounted Velocity Test I shared), really just out of good faith collaboration.

Second, and really emphasizing that reiteration, I really won't pretend I'm materially important to anything Athlon is doing, or Garmin or LabRadar. As I approached this comparison test, I reached out to each of these companies and asked if there were any ways I would fuck up and misrepresent the units, or any ways I could unfairly bias the test, and if there were any specific features they felt made their brand/model stand out. Most companies were interested in discussing the test, but again, I'm just a peckerwood who lives in the hills with too many guns, and all I did was make a few phone calls - the companies just want to support the tribal knowledge of our universe, so they were willing to help out in good faith and loan the demo units.

With that out of the way... Regarding how I see Athlon responding to user feedback, including my own: Specifically, I contacted Athlon because 2 units I had on hand were reading incremental offsets when compared to other chronographs. Athlon offered troubleshooting processes for temporary solutions, and then within a week, I received a text that they were issuing a firmware update the next week which should fix the issue - and it DID improve the occurrence from nearly 100% of sessions down to only 1/3 or 1/4 of sessions. Equally, I had connectivity and sync issues with my 3rd Athlon unit, and their response was again, advice for immediate troubleshooting processes - which did not work - but then after about a month of manually entering hundreds of data points, a firmware update solved that problem. I do know other users are having issues, and I know Athlon is still working on some - it seems the connectivity/sync issues are solved for almost all iOS users, but still not quite ironed out for Android/Samsung phone users, so that's still work in progress. All of that is to say that I have seen firsthand that Athlon is offering CS to productively support user issues, and that they are making incremental firmware updates to their app and to their unit software to improve reliability.

I also acknowledge, as objectively critical as I am of ALL of these units, the Athlon Velocity Pro simply is younger in market and is racing to catch up. LabRadar and Garmin have the luxury of having already survived many of these growing pains because they already have years of head start in this practice - LabRadar had the V1 on the market for a decade before these others, and Garmin has had multiple Radar devices on the market for a few years before the C1, and has had dozens of bluetooth communicating devices on the market for over a decade. So it's reasonable to expect that Athlon would have more CURRENT issues as they are sprinting through their learning curve to catch up - a year or two from now, the expectation of nearly-perfect functional execution is more reasonable than it is today.

2) Can the Athlons work for some users as is? We're 7 pages deep into this thread, so I know my post was kind of lost in the depths at this point, but I did acknowledge in this thread that the Athlons can be fruitfully deployed by a great percentage of shooters. Who would be at risk, in my opinion, and how?

--> With the intermittent high/low offset issue: ELR shooters would be sensitive to the ~30fps spread of potential results I've seen. This is roughly +/-0.5% error from truth, so as good or better than any common optical chronograph from history, but still looser than we want. But an ELR shooter would sail above or sink below targets. A PRS shooter can likely honestly be just fine with this error, and a shooter with a careful sense of attention might simply restart their Athlon when they notice a marked high or low result for a known load.

--> Shooters watching velocity closely as a means of tracking barrel life or watching for carbon ring indicators might be tricked by the offset. Again, simply resetting the unit would correct or confirm the change, but this would be an extra step.

--> Someone trying to compare multiple loads spread over multiple days, where +/-0.5% might represent the difference between loads, could be misled into trusting incorrect relative results. Does that difference of 30fps really matter? Eh, probably not. But if I'm trying to compare, say, two primers, or two lots of powder, or two different brands of cases, or worst of all, looking at one set of charge weights one day against any of the other parameters on the next day, that intermittent offset issue could bite me in the ass. BUT, if a shooter is sure to shoot both comparative samples on the same day, they'd be just fine.

--> An extension of the latter, but an incredibly small niche, if a shooter is trying to test for temperature sensitivity of their load, the intermittent offset issue could lead to false results in the correlation curve.
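The cross-day pitfall above can be shown with made-up numbers (these velocities and offsets are invented for illustration, not measured data): a session-wide offset cancels out of a same-day A-vs-B comparison, but lands squarely inside a cross-day comparison as a phantom load difference.

```python
# Illustration with invented numbers: a session-wide reading bias shifts both
# loads equally, so the same-day A-vs-B *difference* survives, while a
# cross-day comparison absorbs the full offset as a phantom load difference.
from statistics import mean

load_a_true = [2800, 2804, 2798, 2802]  # load A, true velocities (fps)
load_b_true = [2815, 2819, 2813, 2817]  # load B, truly ~15 fps faster

offset_today = 20.0      # the unit happens to read 20 fps high today
offset_other_day = 0.0   # an "honest" day

# Same day: both loads see the same offset; the difference is intact.
same_day_diff = mean(v + offset_today for v in load_b_true) - \
                mean(v + offset_today for v in load_a_true)

# Cross-day: load A on the honest day, load B on the offset day.
cross_day_diff = mean(v + offset_today for v in load_b_true) - \
                 mean(v + offset_other_day for v in load_a_true)

print(same_day_diff)   # 15.0 - the real difference between the loads
print(cross_day_diff)  # 35.0 - the real 15 fps plus 20 fps of phantom offset
```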

I don't think that the higher volatility I've seen with the units (inherent noise between readings of a string) is really of consequence for anyone. Over large sample sets, the SD's (strangely) seem to end up very similar, but when you look at the results as a trendline, the volatility is visibly higher. Inconsequential overall; I guess a guy with a Garmin could post online that display picture with 5 shots and an ES of 7 while the Athlon might read that same string with an ES of 10, but other than bragging online, that's really an inconsequential difference. It looks ugly to a data hound, but an application/field engineer recognizes noise as noise, and signal as signal.
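A quick illustration of that ES-vs-SD point, with invented 5-shot strings (not measured data): the same extra per-shot noise shows up more visibly in the extreme spread of a short string than in the standard deviation.

```python
# Invented 5-shot strings for the same load: the second is noisier per shot.
# ES (max minus min) of a short string advertises the noise loudly; SD less so.
from statistics import pstdev

def es(readings):
    """Extreme spread: max reading minus min reading."""
    return max(readings) - min(readings)

garmin_string = [2801, 2803, 2800, 2804, 2797]
athlon_string = [2799, 2805, 2796, 2806, 2796]  # same load, noisier readings

print(es(garmin_string), es(athlon_string))  # 7 10  (the ES 7 vs ES 10 case)
print(round(pstdev(garmin_string), 1), round(pstdev(athlon_string), 1))  # 2.4 4.3
```

Over a 30+ shot sample the SDs converge much closer together, which matches the observation above that large-sample SDs end up similar even when the trendline looks noisier.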

I also don't think even the guys riding the ragged edge of Power Factors or velocity speed limits could get themselves in hot water, because the maximum offsets I'm seeing are in the 20's fps... Maybe if the absolutely unlikely coincidence of a guy measuring their load velocity at home and reading 20fps LOW then going to a match and being checked by a different Athlon which intermittently measured 20fps HIGH that day, they could swing 40fps and end up above the speed limit, but restarting EITHER unit would bring them both within 20fps of each other, so I really don't see that as a REAL issue.

And again, I do think the offset issue will be resolved by future firmware updates. It has been partially solved already. The volatility is still very tight, set up is fast and easy, nothing downrange, so they're always going to be superior to any optical chronograph, and most folks will find them easier to set up than Magnetospeeds, so ultimately, at sub $300, with a little higher volatility and some quirks compared to the $500 Garmin or LabRadar, it's still a great unit.

Maybe another statement which might mean a lot to folks - I'd rather buy an Athlon than a LabRadar V1 at the same price today.
 

And again, I do think the offset issue will be resolved by future firmware updates. It has been partially solved already. The volatility is still very tight, set up is fast and easy, nothing downrange, so they're always going to be superior to any optical chronograph, and most folks will find them easier to set up than Magnetospeeds, so ultimately, at sub $300, with a little higher volatility and some quirks compared to the $500 Garmin or LabRadar, it's still a great unit.

Maybe another statement which might mean a lot to folks - I'd rather buy an Athlon than a LabRadar V1 at the same price today.
Thanks V, your typically thorough and thoughtful response. Really appreciate it.
 
@Turkeytider - Addressing your 2 posts here based on the 2 posed or implied questions:

1) How has Athlon responded to these results?
2) Can the Athlon work for some shooters as it is?

I'll try to be succinct as possible here:

1) How has Athlon responded to my results? First off, reiterating, I'm not affiliated with any of these companies. I bought my first Garmin and Athlon units, and I won my LabRadar LX at Ko2M - but would have bought one if I hadn't won it. My "second unit" of each of these brands has been loaned to me either by the companies or their reps, just to help me avoid long-term borrowing units from other shooters and enable improved data capture opportunities (such as the Static vs. Gun Mounted Velocity Test I shared), really just out of good faith collaboration.

Second, and really emphasizing that reiteration, I really won't pretend I'm materially important to anything Athlon is doing, or Garmin or LabRadar. As I approached this comparison test, I reached out to each of these companies and asked if there were any ways I would fuck up and misrepresent the units, or any ways I could unfairly bias the test, and if there were any specific features they felt made their brand/model stand out. Most companies were interested in discussing the test, but again, I'm just a peckerwood who lives in the hills with too many guns, and all I did was make a few phone calls - the companies just want to support the tribal knowledge of our universe, so they were willing to help out in good faith and loan the demo units.

With that out of the way... Regarding how I see Athlon responding to user feedback, including my own: Specifically, I contacted Athlon because 2 units I had on hand were reading intermittent offsets when compared to other chronographs. Athlon offered troubleshooting processes as temporary solutions, and then within a week, I received a text that they were issuing a firmware update the next week which should fix the issue - and it DID improve the occurrence from nearly 100% of sessions down to only 1/3 or 1/4 of sessions. Similarly, I had connectivity and sync issues with my 3rd Athlon unit, and their response was again advice for immediate troubleshooting processes - which did not work - but then, after about a month of manually entering hundreds of data points, a firmware update solved that problem. I do know other users are having issues, and I know Athlon is still working on some - it seems the connectivity/sync issues are solved for almost all iOS users, but still not quite ironed out for Android/Samsung phone users, so that's still a work in progress. All of that is to say that I have seen firsthand that Athlon is offering CS to productively support user issues, and that they are making incremental firmware updates to their app and to their unit software to improve reliability.

I also acknowledge, as objectively critical as I am of ALL of these units, that the Athlon Velocity Pro is simply younger in the market and is racing to catch up. LabRadar and Garmin have the luxury of having already survived many of these growing pains because they have years of head start in this practice - LabRadar had the V1 on the market for a decade before these others, and Garmin had multiple radar devices on the market for a few years before the C1, and has had dozens of Bluetooth-communicating devices on the market for over a decade. So it's reasonable to expect that Athlon would have more CURRENT issues as they sprint through their learning curve to catch up - a year or two from now, the expectation of nearly-perfect functional execution will be more reasonable than it is today.

2) Can the Athlons work for some users as is? As I mentioned above - we're 7 pages deep into this thread, so I know my post was kind of lost in the depths at this point, but I did acknowledge in this thread that the Athlons can be fruitfully deployed by a great percentage of shooters. Who would be at risk, in my opinion, and how?

--> With the intermittent high/low offset issue: ELR shooters would be sensitive to the ~30fps spread of potential results I've seen. This is roughly +/-0.5% error from truth, so as good as or better than any common optical chronograph from history, but still looser than we want - an ELR shooter would sail above or sink below targets. A PRS shooter can honestly be just fine with this error, and an attentive shooter might simply restart their Athlon when they notice a markedly high or low result for a known load.

--> Shooters closely watching velocity as a means of tracking barrel life or watching for carbon ring indicators might be tricked by the offset. Again, simply resetting the unit would correct or confirm the change, but this would be an extra step.

--> Someone trying to compare multiple loads spread over multiple days, where +/-0.5% might represent the difference between loads, could be misled into trusting incorrect relative results. Does that difference of 30fps really matter? Eh, probably not. But if I'm trying to compare, say, two primers, or two lots of powder, or two different brands of cases, or worst of all, looking at one set of charge weights one day against any of the other parameters the next day, that intermittent offset issue could bite me in the ass. BUT, if a shooter is sure to shoot both comparative samples on the same day, they'd be just fine.

--> An extension of the latter, but an incredibly small niche: if a shooter is trying to test for temperature sensitivity of their load, the intermittent offset issue could lead to false results in the correlation curve.

I don't think that the higher volatility I've seen with the units (inherent noise between readings of a string) is really of consequence for anyone. Over large sample sets, the SD's (strangely) seem to end up very similar, but when you look at the results as a trendline, the volatility is visibly higher... Inconsequential overall - maybe a guy with a Garmin could post online that display picture with 5 shots and an ES of 7 while the Athlon might read that same string with an ES of 10, but other than bragging online, that's really an inconsequential difference. It looks ugly to a data hound, but an application/field engineer recognizes noise as noise, and signal as signal.

I also don't think even the guys riding the ragged edge of Power Factors or velocity speed limits could get themselves in hot water, because the maximum offsets I'm seeing are in the 20's fps... Maybe in the absolutely unlikely coincidence that a guy measures his load velocity at home on a unit reading 20fps LOW, then goes to a match and is checked by a different Athlon which intermittently measures 20fps HIGH that day, he could swing 40fps and end up above the speed limit - but restarting EITHER unit would bring them both within 20fps of each other, so I really don't see that as a REAL issue.
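To put numbers on that worst case - a minimal sketch, where the 20fps offset magnitudes and the speed-limit/velocity values are purely hypothetical:

```python
# Worst-case disagreement between two chronographs whose intermittent
# offsets happen to land in opposite directions (magnitudes hypothetical).

def worst_case_swing_fps(home_offset: float, match_offset: float) -> float:
    """Home unit reads LOW by home_offset, match unit reads HIGH by
    match_offset: the shooter's number and the match number can differ
    by the sum of the two offsets."""
    return home_offset + match_offset

def could_fail_check(true_mv: float, speed_limit: float, match_offset: float) -> bool:
    """True if a high-offset match unit could display the load above the limit."""
    return true_mv + match_offset > speed_limit

print(worst_case_swing_fps(20.0, 20.0))        # 40.0 fps possible swing
print(could_fail_check(3190.0, 3200.0, 20.0))  # True: could display up to 3210
```

Restarting either unit, as noted above, collapses the two independent offsets back toward one, which is why the full 40fps swing is so unlikely in practice.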

And again, I do think the offset issue will be resolved by future firmware updates. It has been partially solved already. The volatility is still very tight, setup is fast and easy, and there's nothing downrange, so they're always going to be superior to any optical chronograph, and most folks will find them easier to set up than Magnetospeeds. Ultimately, at sub-$300, with a little higher volatility and some quirks compared to the $500 Garmin or LabRadar, it's still a great unit.

Maybe another statement which might mean a lot to folks - I'd rather buy an Athlon than a LabRadar V1 at the same price today.
I have also been running the Garmin side by side with the Athlon and would like to pick your brain on a couple of things. I am still using release firmware on the Athlon BTW, I will be updating it before my next range session but was initially hesitant based on the fact Garmin had updates where their unit operated quite poorly relative to the release firmware. Last range session I had a few mis-reads though so I'm going to update now.

1) Do your Athlon units operate well in the 4"-10" distance from the muzzle range the directions indicate? Mine misses shots when it's that close and seems to want to be 20+ inches behind and to the side. I have not mounted mine to the rifle; are you getting good results in that configuration?

2) When you say the Athlon is not showing as good of an SD and ES and that the data is more volatile, are you seeing something like 1-2 shots out of 50 that are off by 100fps while the rest of the shots fall in a similar pattern to the Garmin, or does the overall spread look wider? How much difference are you seeing between Garmin and Athlon SD and ES numbers in a data set? Do you think we are talking random individually misread shots, systematically lower precision, or both?
 
Buckle up, ski bunnies, we're going on a trip.


I have also been running the Garmin side by side with the Athlon and would like to pick your brain on a couple of things. I am still using release firmware on the Athlon BTW, I will be updating it before my next range session but was initially hesitant based on the fact Garmin had updates where their unit operated quite poorly relative to the release firmware. Last range session I had a few mis-reads though so I'm going to update now.

I think the firmware updates have partially fixed the offsets I experienced, and I hope future firmware updates completely eliminate them. My first few days on the line with the Athlons against other brands saw teens to 20's high or low, on every session. Now, the issues seem to only occur 1/4-1/3 of sessions, more often high than low, and typically with LESS offset when low than when high (so low offsets might be 5-7fps, high offsets might be 10-25fps). It's better than it was originally in May, but not quite where I feel they need to be yet.

1) Do your Athlon units operate well in the 4"-10" distance from the muzzle range the directions indicate? Mine misses shots when it's that close and seems to want to be 20+ inches behind and to the side. I have not mounted mine to the rifle; are you getting good results in that configuration?


I can't say I have noticed any specific issues with position of the Athlons - or the Garmins, or the LabRadar LX's either for that matter. By and large, I don't worry about it, I put them where I want them, and they pick up shots.

For all of my comparison testing, I've made sure they were all at least laterally within the proper width to the sides, such as using that fixture I have pictured below.

Here is a shot of the first time I did a side-by-side comparison with the 3 brands (brake blast knocked over one of my Garmins). That's a 375CT rifle, so the units are well behind the muzzle by more than 10", and I'd be certain the outside units are more than 10" wider to the side as well, since I know that bipod mat is wider than 21". But they all read - when they weren't knocked over.
1757683592651.png


I'd done this quite a bit with my Garmin over time, so I tried it with the Athlon, and it also worked - I can stand behind shooters, likely 10-15ft behind the muzzle and 3-4ft above bore and pick up shots just by holding the Garmin or Athlon like a GoPro.
1757683616051.png


Going the opposite direction now, and getting closer to the gun: I built this rig to control position during the bulk of my comparative testing work. All of the small units are positioned within the boundary of the 10" limit from the barrel, and I have hundreds of rounds fired in this way.
1757684742553.png



I shared on this forum my Static vs. Gun Mounted test results - showing the difference in muzzle velocity reading when mounted to the rifle vs. to a tripod. In this photo, you can't see the second Athlon mounted on the far side of the rifle - only the tension knob of the arca mount it's on is visible: the green knob in front of the Garmin & LabRadar mount.
1757687407153.png


Opposite side view while my son was setting up the Athlon on the gun mount.
IMG_2702.jpeg


So I guess I just haven't run into any issues with placement which would give me concern - so I don't concern myself with it. HOWEVER, I AM working now on a series of tests which will let me watch the sensitivity of the units to direction of aim and relative unit position to the muzzle, both in ability to pick up shots as well as the influence of angle and position on the measured velocity.

2) When you say the Athlon is not showing as good of an SD and ES and that the data is more volatile, are you seeing something like 1-2 shots out of 50 that are off by 100fps while the rest of the shots fall in a similar pattern to the Garmin, or does the overall spread look wider? How much difference are you seeing between Garmin and Athlon SD and ES numbers in a data set? Do you think we are talking random individually misread shots, systematically lower precision, or both?

Actually, my personal perspective on my observations of this data is that "it's really fucking weird," but I can tell it's systemic noise, not random bad readings. It is NOT 1-2 shots which are off by 100fps, and the easiest evidence for that is that the ES and SD aren't artificially higher than expected, and are not spread inappropriately. I'm talking about the actual data volatility, not simplified metrics.

TLDR version: Irrefutably, no, this is not 1-2 shots which are off by 100fps, and it is not being caused by a few random mis-readings. These data sets pass all of the common heuristic tests: the ES's fall within the expected ~4.5-5.5x ratio to the SD's, and frankly, the "really fucking weird" part is the fact that the ES's and SD's aren't even materially different in most cases - but when I visually inspect the plotlines for these sample sets, I can see higher volatility. These data series all follow a Normal Distribution trend, and my data sets, which I've shared in their entirety for the comparisons I have published, are NOT being skewed by a few bad readings. This is systemic volatility, and the fact the Athlons are reading considerably higher volatility than the other two brands suggests a lower SNR than the other brands - more noise, less accuracy.

That's all kind of hard to put into words, but:

1) Variability between 2 units of the same brand, and the possibility/impossibility of achieving the potential +/-0.1% accuracy specification: I have fired several hundred rounds in testing with 2 each of Athlons, Garmins, and LabRadar LX's side by side, as pictured above in that holding fixture. This doesn't give me a direct means of measuring how close any chronograph is to "truth," but it DOES give me defensible evidence when a brand is FAILING to be within its specified +/-0.1% accuracy to "truth." If both units of a given brand were always within +/-0.1% of "truth," then the two units could never be more than 0.2% of the MV apart - otherwise we KNOW that at least ONE of them, or BOTH, must be more than +/-0.1% "wrong" from truth. The FARTHEST either could ever be from "truth" is 0.1%, so if one reads as slow as it could and still be "true," and the other as fast as it could and still be "true," then they can only be 0.2% of the MV apart.
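That consistency check can be sketched in a few lines - a minimal example assuming the published +/-0.1% spec, with hypothetical shot-by-shot readings:

```python
# Two-unit consistency check: if both units meet a +/-0.1% spec, their
# readings can never be more than 0.2% of the MV apart on any shot.

def max_allowed_spread_fps(avg_mv_fps: float, spec_pct: float = 0.1) -> float:
    """Widest spread two in-spec units could ever show (2x the spec)."""
    return 2.0 * (spec_pct / 100.0) * avg_mv_fps

def out_of_spec_flags(unit_a: list[float], unit_b: list[float],
                      spec_pct: float = 0.1) -> list[bool]:
    """True on shots where at LEAST one of the two units MUST be out of spec."""
    flags = []
    for a, b in zip(unit_a, unit_b):
        avg = (a + b) / 2.0  # best available stand-in for "truth"
        flags.append(abs(a - b) > max_allowed_spread_fps(avg, spec_pct))
    return flags

# 2.17 fps is the most two in-spec units could differ on a ~1085 fps rimfire shot:
print(round(max_allowed_spread_fps(1085.0), 2))  # 2.17

# Hypothetical centerfire shots - the 20.8 fps disagreement on shot 2 proves
# at least one unit missed the +/-0.1% spec (only ~5.6 fps is allowed there):
print(out_of_spec_flags([2805.1, 2811.0, 2799.5], [2803.0, 2790.2, 2801.0]))
# [False, True, False]
```

Note the one-sided nature of the test: a True flag proves a violation, but a False flag only means both units COULD be in spec, exactly as described above.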

So I tested that...

This is a 100 round rimfire data set with a ~1085fps average, so the +/-0.1% expectation means each unit can only be +/-1.1fps from truth, and the two units of each brand can only be 2.2fps apart - otherwise we know at least ONE of them is not reading within specified accuracy to "truth." The Garmins were never more than 1.5fps apart over 100 rounds, which is within the possibility that BOTH units were reading every shot's true value within 0.1%. The LabRadar LX's farthest spread between the 2 units was 4.68fps (they read to the 1/100ths digit), which is slightly more than 0.4% apart - outside of the +/-0.1% claim by more than double - meaning at least one, if not both, was NOT displaying the true speed. The Athlons also had a max spread of 4.6fps between the two, again about 0.4% of the MV from one another, such that it is impossible for both units to have been reading within +/-0.1% of the true speed. But never in those 100 rounds were the Garmins more than 1.5fps apart, meaning both COULD have been displaying the true speed, within their specified +/-0.1%.
1757694031559.png


Another 100 round rimfire data set where the Average was (coincidentally exactly) 1200fps. Again, +/-0.1% corresponds to a span between 2 units of no more than 2.4fps. The Garmins achieved this; the LabRadars missed by about 50%, so just outside at roughly +/-0.15% apart; but the Athlons were 6.5fps apart, which is equivalent to a +/-0.27% spread, not +/-0.1%...
1757694766371.png


Here's a 2805fps average centerfire dataset analysis - 51 rounds across the Garmins and Athlons, but I missed just enough shots that I could only reconcile 39 rounds with the LabRadars. A 2805fps Average means +/-0.1% allows a 5.6fps spread to be within tolerance, and again, ONLY the Garmins achieved that potential, with the max spread being only 2.8fps between the two. The LabRadars were 15.2fps apart, which would be in the +/-0.3% ballpark, and the Athlons were as far as 19.4fps apart, meaning roughly a +/-0.4% band. The AVERAGE difference between the two Athlons was outside of the +/-0.1% tolerance - effectively, we know that for at least 25 of the 51 shots, the Athlons were outside of their specified tolerance to truth.
1757695059139.png


Comments to these data sets:

--> ALL of these velocities COULD be "wrong," and at least ONE unit of each brand COULD be right, but it simply isn't possible for BOTH to be within their spec for proximity to "truth" if we have more than twice their spec between their readings - so ONLY the Garmins COULD both be right in these tests, and we know at LEAST ONE of the Athlons and LabRadars is wrong, if not both.

--> If you note the Max vs. STDEV spreads, each ES is roughly 4-5x the SD for all 3 brands in all 3 of these experiments, which is typical of a Normally Distributed sample set - so it's not a matter of the brands taking one or two bad readings and producing big, false ES's; this is simply an indication of systemic noise.
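That ES-to-SD heuristic is easy to automate - a rough sketch with simulated strings (the velocities and SD are hypothetical, generated values, not my recorded data):

```python
import random
import statistics

def es_sd_ratio(velocities: list[float]) -> float:
    """Extreme spread divided by sample standard deviation."""
    return (max(velocities) - min(velocities)) / statistics.stdev(velocities)

# For a ~100 shot normally distributed string, ES/SD typically lands near 4.5-5.5x.
random.seed(1)
clean_string = [random.gauss(2805.0, 8.0) for _ in range(100)]
print(round(es_sd_ratio(clean_string), 2))

# One 100fps misread inflates the ES far faster than the SD, pushing the
# ratio well above the normal band - which is exactly what these data sets
# do NOT show, ruling out the "couple of fliers" explanation.
with_flier = clean_string + [2905.0]
print(round(es_sd_ratio(with_flier), 2))
```

A ratio far above ~5.5 on a 50-100 shot string is a quick red flag for fliers; a ratio inside the band, as in the data sets above, points to ordinary distributed noise instead.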

2) Visual inspection of data plotlines for volatility observation: It's relatively quick and easy to plot these data sets and visually inspect for outliers which might skew the results. Again, I ran the quick and dirty heuristic test as to whether the ES-to-SD ratio made sense for a sample set following a Normal Distribution (they all do), but I also plotted these data strings to allow visual confirmation. Because the data strings inherently cross back and forth (velocity high, velocity low, high, low, etc.), I re-ordered the data based on increasing Average Velocity (average of all 6 chronos), and re-plotted. We can see a smooth curve produced with no random outliers for any of the 6 chronographs, tracing a Normal Distribution, center-weighted trend with high and low tails. This re-plot allows us to look for random spikes and "bad readings," and to evaluate the distribution of the datasets as well as the systemic NOISE and VOLATILITY of the data strings, beyond the distilled ES and SD data. And here, we see higher volatility from the Athlons than from the Garmins or LabRadars.
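The re-ordering step can be sketched like this - unit names and readings are hypothetical, and a shot-to-shot "roughness" number stands in for the visual plot:

```python
import statistics

def reorder_by_consensus(readings: dict[str, list[float]]) -> dict[str, list[float]]:
    """Re-order every unit's shot string by each shot's average across all
    units, so a quiet unit's trace climbs smoothly from slow to fast."""
    units = list(readings)
    n = len(readings[units[0]])
    consensus = [statistics.mean(readings[u][i] for u in units) for i in range(n)]
    order = sorted(range(n), key=consensus.__getitem__)
    return {u: [readings[u][i] for i in order] for u in units}

def roughness_fps(trace: list[float]) -> float:
    """Mean absolute shot-to-shot jump of a re-ordered trace; a noisier
    unit shows a larger number even when ES and SD look similar."""
    return statistics.mean(abs(b - a) for a, b in zip(trace, trace[1:]))

# Hypothetical 5-shot strings from two units watching the same shots:
readings = {
    "unit_a": [2801.0, 2812.0, 2795.0, 2808.0, 2790.0],
    "unit_b": [2807.0, 2806.0, 2801.0, 2802.0, 2788.0],
}
sorted_traces = reorder_by_consensus(readings)
for unit, trace in sorted_traces.items():
    print(unit, round(roughness_fps(trace), 1))  # unit_b comes out rougher
```

The sorted traces are exactly the "smooth curve" described above: a quiet unit produces monotonic-looking steps, while a noisy unit keeps zig-zagging against the consensus order even after sorting.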

So I wanted to see if the large number of samples in the set could be hiding 1-2 big flukes and diluting their influence on the SD, so I looked for that...

Here's a 99 round rimfire set (CCI SV, if anyone is interested) which had SD's of 12.7-12.9fps across all 6 units. The ES's measured by each of the 6 units were within 2fps of one another, ranging from 69.0 to 70.6fps, and again, all 6 units agreed the SD's were 12.7fps to 12.9fps. But when you look at the data plots visually, the volatility behavior becomes more obvious. The Purple/Pink trends are Athlons, the Greens are Garmins, and the Oranges are LabRadars. In this plot, we see the Athlons were offset ~5fps high from the others, but also notice that their trendline is more volatile - rougher, noisier. The Garmins had the tightest lines, bouncing within the peaks of the LabRadars, but the Athlons had notably more noise than the other two brands.
1757696419785.png


Here's the centerfire 51 round sample set mentioned above. In this dataset, the LabRadars were offset slightly low, 2-3fps slower in Average MV than the Garmins, and the Athlons were repeatedly offset 8-9fps higher than the Garmins. Again, the ES's were very similar, 42.5fps to 46.9fps, and the SD's measured by all 6 showed 7.9fps to 9.6fps. Really not much of a difference there (1.7fps difference in SD's, on units which should only be reading within +/-2.8fps of truth). But visually, again, we can see 1) there are no huge spikes from fluke outliers, and 2) the Purple lines are noisier than the Green or Orange lines. (***Noting here, this plot is zoomed in - half as wide and ~17% taller than the plot above - so the same amount of noise looks larger.***)
1757696977233.png


Overall, as I mentioned previously, I'm seeing substantially higher volatility in the Athlon results. They're still a better chronograph than any common optical chronograph on the market, and the lower cost of the Athlon compared to the LabRadar LX and the Garmin may make absolute sense for someone who isn't shooting ELR or doesn't rely upon comparing data from sessions which are separated by multiple days, but there's a performance difference I can measure right now. Firmware MAY fix this, and I'll keep testing over time to see if and when it does.