Given that I can not release the results done by the manufacturers (lots of NDAs)......
How many companies do you have "NDAs" with, and which ones?
Could you go more into detail about the "well structured tests" you've seen? What did the tests consist of?
Originally Posted by koshkin
Any scope I might go hunting with is subjected to some number of recoil cycles and then lives in the trunk of my car properly mounted on a rifle in a soft case bouncing around for a couple of weeks. If something snuck by QC, it will come out. Beyond that, you are just playing the odds. Anything can break and occasionally does.
I'm sure you're aware that what you posted in the quote above is a small part of what Form does, correct? Are you implying that when you do it, it's valid? Or do you do it knowing it means nothing and isn't statistically meaningful?
It is most certainly not meaningful in any statistical sense. I am implying it is valid on that one scope and one scope alone. That has no bearing whatsoever on any other sample of the same product line.
I am trying to get past the period where any infant mortality issues with my specific scope might pop up to minimize the chance of catastrophic failure in the field for that one specific scope. That's it. Nothing more.
Also keep in mind that I never fudge the business with the mounts since I have no interest in making any scope look good or look bad.
As for NDAs, one of the first sentences in almost any NDA is that I am not supposed to disclose who that NDA is with.
ILya
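The "infant mortality" reasoning above can be sketched numerically. A minimal illustration, assuming a hypothetical Weibull failure model with shape < 1 (a decreasing hazard rate; all numbers here are invented, not from any manufacturer data):

```python
import math

def weibull_survival(t: float, shape: float, scale: float) -> float:
    """Weibull survival function S(t) = exp(-(t/scale)**shape).
    shape < 1 models 'infant mortality': the hazard decreases over time."""
    return math.exp(-((t / scale) ** shape))

# Hypothetical parameters: a unit that survives a burn-in period of t=1
# is much less likely to fail in the next interval than a fresh unit.
fresh_fail = 1 - weibull_survival(1.0, 0.5, 10.0)  # chance of failing in [0, 1)
survivor_fail = 1 - weibull_survival(2.0, 0.5, 10.0) / weibull_survival(1.0, 0.5, 10.0)  # in [1, 2)
print(round(fresh_fail, 3), round(survivor_fail, 3))  # 0.271 0.123
```

Under these made-up numbers, surviving the burn-in roughly halves the chance of failing in the next interval, which is the whole point of the trunk test for that one scope.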
What if you performed your own trunk test with four different samples of the same scope and all four failed to retain zero? Would you be concerned or just chalk it up to bad luck?
What does "udge the business with the mounts" mean, and what are you implying?
So you can't talk about your NDAs, but you can make accusations about others? (It's well documented that you've accused Form of being paid by optics companies.) Seems a bit disingenuous to me.
That was supposed to say "fudge". Typo.
Four is not enough for anything statistical, but it would be more significant than a sample of one. It would still not tell us anything about the probability of a long-term problem, but it would tell us more about the possibility of an early issue, or the lack thereof, than looking at just one. Not a whole lot more, though, since it is not as if I have a set off-road course I drive through to replicate the exact impacts.

There are always small manufacturing inconsistencies and machining marks, and sometimes simply using the scope works through them. Again, it does nothing statistical, but it is meaningful for that specific scope. I remember I once had a 5-25x56 Strike Eagle that did a weird tracking thing. My best guess is that there was a tiny machining mark on the turret contact pad; after twisting the turrets back and forth a few dozen times, it went away, presumably because the mark simply wore in. Does that imply anything for the rest of the Strike Eagle scopes? Not a damn thing.
ILya
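The "four is better than one, but not by much" point can be made concrete with simple binomial arithmetic. A minimal sketch; the 5% early-failure rate below is an invented illustration, not a figure from this thread:

```python
def p_at_least_one_failure(p: float, n: int) -> float:
    """Chance that a sample of n scopes shows at least one early failure,
    assuming each unit independently fails with probability p."""
    return 1 - (1 - p) ** n

# With a hypothetical 5% early-failure rate:
print(round(p_at_least_one_failure(0.05, 1), 3))  # 0.05
print(round(p_at_least_one_failure(0.05, 4), 3))  # 0.185
```

Four samples catch an early problem more often than one, but would still miss a 5% defect rate more than 80% of the time.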
I meant to type "fudge". So that statement was basically saying you know how to mount scopes?
Regarding the issue with your Strike Eagle: what if it occurred with 6 different Strike Eagles? What would the statistical probability be of getting 6 that malfunctioned? How many would need to be tested to be statistically valid? 50, 100, 1,000?
Do you know of any optics company that tests rifle scopes for zero retention from impacts? As in mounting them to a rifle: shoot, impact, shoot? If so, who? If not, why?
I’ve done various ‘quality engineering’ tasks in my career. A very high confidence level for a particular variable can be obtained by measuring that variable on 30 sample parts that are all produced using the exact same process, materials, tools, etc… However, the level of confidence needed (acceptance criteria) is established based on a lot of inputs (anticipated production/sales volume, acceptable deviation from nominal, severity of non-conformance, etc…). Not all brands will come up with the same acceptance criteria based on their business model.
Something to keep in mind is that any variable (dimension, material property, etc.) that could result in a failure, like a shift in zero or tracking variance, has to be collected and monitored in order to ensure conformance. Usually this means multiple characteristics on each individual part, and in the case of a scope there would be multiple parts that could produce similar failure modes. I note that to say that for lower-volume brands, it may well be less costly to fix units as they fail, based on customer feedback, than to invest in the resources to develop and maintain detailed quality control plans.
Don't speculate when you don't know, and don't second guess when you do.
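The "30 samples for high confidence" rule of thumb mentioned above lines up with the standard zero-failure (success-run) sample-size relation, n = ln(1 - C) / ln(R). A quick sketch:

```python
import math

def zero_failure_sample_size(reliability: float, confidence: float) -> int:
    """Units to test with zero failures allowed in order to demonstrate
    `reliability` at `confidence` (success-run theorem)."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

# Demonstrating 90% reliability at 95% confidence:
print(zero_failure_sample_size(0.90, 0.95))  # 29 -- close to the "30 samples" rule of thumb
```

As the post notes, the acceptance criteria (the R and C a brand actually targets) vary with business model, so the required n varies too.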
Yes, I know how to mount scopes, but the statement was meant to say that if someone wants to get a particular result, it is very easy to do so with selective scope mounting. Frankly, mounting is a pretty straightforward thing, yet it is remarkable how many problems come from it.
No scope company tests every scope that way. Every respectable company I can think of does that during the product development process many times. Hell, I've done it for a couple of them in the past. It ain't rocket science if you pay attention to consistency. Expensive scopes get some amount of individual evaluation, i.e. every scope gets looked at. As they become less expensive, companies do spot checks. For some larger companies, they will pull a scope from every lot and beat it up a little to see if there are systemic lot to lot variations.
How much testing is statistically significant is hard to say, because lot-to-lot variation plays a role. Assuming manufacturing is consistent and the expected failure rate is in the 1% to 4% range, depending on the scope's price and build quality, looking at 40-50 scopes should be reasonably indicative. More is, of course, better. Some of that can be short-circuited with accelerated wear testing; some companies do it, but it is hard to say how many.
Here is the rub: most companies do not make enough scopes to easily afford to go bang up 50 of them, so there is a lot of educated guessing and trusting the OEM happening.
Once the scope is out, a lot is learned from simply analyzing service department records. That's usually where we find the most statistically significant data on how different designs hold up under average use.
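The "40-50 scopes" figure above can also be framed the other way round: the smallest sample size at which you would probably (say, 90% of the time) see at least one failure, given an assumed per-unit failure rate. A rough sketch under an independence assumption; the rates are the 1%-4% range mentioned above:

```python
import math

def samples_for_detection(p: float, target: float) -> int:
    """Smallest n such that the chance of seeing at least one failure
    among n units reaches `target`, with per-unit failure rate p."""
    return math.ceil(math.log(1 - target) / math.log(1 - p))

for p in (0.01, 0.02, 0.04):
    print(f"failure rate {p:.0%}: n = {samples_for_detection(p, 0.90)}")
# failure rate 1%: n = 230
# failure rate 2%: n = 114
# failure rate 4%: n = 57
```

At the upper end of the failure-rate range, 40-50 scopes give a decent chance of surfacing a problem; at 1%, catching it reliably would take hundreds of units, which is exactly why service-department records end up being the richest data source.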
That's the strategy where the customer who bought the product also serves as the durability test function.
I'll take a guess that the companies who use that strategy as a principal measure of quality/durability speak the loudest about their exceptional warranty/customer-service policies in media ads.
Some companies DO validate their product, either with in-house labs or outside test labs. My comment above is not aimed specifically at optics companies.
With most companies I know, marketing people and service/repair people do not talk to each other very much. There are, of course, exceptions.
I recall a guy from one of the Instagram optics companies (Atibal, I think) telling me that all this product development process is a bunch of nonsense and that he can get any product configuration to market in 6 to 12 months: just tell the Chinese what specs you want and they make it. Well, that explains his products.
Serious companies have serious engineering groups and serious quality departments.
It is most certainly not meaningful in any statistical sense. I am implying it is valid on that one scope and one scope alone. That has no bearing whatsoever on any other sample of the same product line.
I'm pretty sure that Form makes the same sort of claim in his tests.
koshkin, aside from Arken's apparent lack of interest in QC and CS, have you found their products to perform pretty well?
Hard to say. I have one here and it works well for the money. However, the failure rates on them seem to be so high that I do not know what to make of them. One YouTuber I know received six different Arkens over the last couple of years, and three had to go back for various problems. The match directors I talk to at rimfire matches say that one or two Arkens go down at every match they run. Arken gets the look and feel right, but seems to need some work on the fundamentals.
For me, when I review a scope, I have to have reasonable faith that they make them consistently. I am sorta rooting for Arken to get it all worked out since I like what they are doing with the potential value proposition. Until I see some consistency, I'll be watching from the sidelines.
This has been very informative and makes sense to me on a practical level. I'm a scientist by training, and the statistics make sense to me, especially on this subject. There is no such thing as a zero failure rate. I've also long thought that some companies use buyers as their QA/QC department.
You mentioned a few mounts I've seen but not used - I'll have a look.
I hope we can keep this respectful and learn - and not turn this into an aggressive name-calling pissing match.
I remember I once had a 5-25x56 Strike Eagle that did a weird tracking thing. My best guess is that there was a tiny machining mark on the turret contact pad; after twisting the turrets back and forth a few dozen times, it went away, presumably because the mark simply wore in.
ILya
While not the scope you reference, I posted on this a while back with a pic or two so people could see the problem. When I took the scope apart, the erector tube had some nasty gouges where the turret pad contacted it. After I polished those out, the scope tracked repeatably and very close to the manufacturer's stated adjustment increments.
I've never found it a problem,to simply procure that which intrigues and flog upon same. If it connects dots,I buy more. Hint.
Not that the Fixture Fhuqkery and lack of Live Rounds,ain't funnier than fhuqk. Hint.
Just sayin'..................
Brad says: "Can't fault Rick for his pity letting you back on the fire... but pity it was and remains. Nothing more, nothing less. A sad little man in a sad little dream."
Koshkin, you say that most reputable brands do some sort of drop or impact test for zero retention: shoot to verify baseline, drop or create an impact of some sort, then shoot again to verify zero retention. I think that's what you said. Are they actually doing this in a manner that imparts lateral force via side impacts, to mimic actual drops that might occur in the field? Or via some sort of machine that only mimics longitudinal force (recoil simulation)?
I haven’t seen much of the former (outside of Nightforce). I have seen plenty of the latter, and frankly it doesn’t mean much to me. Of course a scope should hold up to recoil as that’s the very job it was assigned to do. That’s a minimum level expectation. Holding up to recoil of hunting rounds, magnums included, is really nothing exceptional, imo. Please show me exceptional so that I know it will easily hold up to the routine. Of course I also expect the tires I buy to be round, hold air, and roll down the road too.
And if a scope is truly drop tested/torture tested in some manner, why isn’t that shown in marketing materials? Nightforce seems to be the only company that actually demonstrates their durability claims. Consequently, I own more NF scopes than any other brand, and they are on all of my most important hunting rifles.
As a hunter, durability is my #1 concern. I buy quality scopes that naturally have good glass. The glass nuances and subtleties on $1000+ scopes are mostly meaningless for most hunters. Why do marketing departments focus so much energy on stuff that matters less? Show us more of the torture testing and we’d probably buy a lot more of those scopes.
That's the thing: manufacturers like Leupold have a "trust me bro, our scopes are fine, you're just mounting them wrong" attitude, like the claims in their now-defunct video. If they showed good testing, we'd be more likely to believe the claims. The thing with Form's tests is that they might not have been statistically valid, but they often tended to reflect the results many of us were already seeing in the field with the exact scopes that were failing. If someone has a better test, bring it on. In the meantime, we go with the data presented in the tests available to us.
Set aboard my 18lb+ Bart' barreled Vudoo at 50yds,with 20 Mil's in the erector,between shots. Hint.
Remounted it a coupla weeks later,for more laughs. 3-shot cluster with only zoom manipulations(swung stop to stop twice),the wild brace with 10 Mil's of erector between pokes. Hint.
As per always,when the Swiffer 5-20x HD FFP MQ LitBitch goes back on board,it's in the .2's. Hint.
Never not interesting,to extrapolate side by each,with Live Fire. Hint.
Just sayin'.................
You're preaching to the choir
I got banned on another web site for a debate that happened on this site. That's a first.