Originally Posted by Jackson_Handy
Originally Posted by koshkin
Originally Posted by Jackson_Handy
Originally Posted by koshkin
Originally Posted by Jackson_Handy
Originally Posted by koshkin
Given that I cannot release the results done by the manufacturers (lots of NDAs)...


How many companies, and which ones, do you have "NDAs" with?

Could you go more into detail about the "well structured tests" you've seen? What did the tests consist of?

Originally Posted by koshkin
Any scope I might go hunting with is subjected to some number of recoil cycles and then lives in the trunk of my car properly mounted on a rifle in a soft case bouncing around for a couple of weeks. If something snuck by QC, it will come out. Beyond that, you are just playing the odds. Anything can break and occasionally does.

I'm sure you're aware that what you posted in the quote above is a small part of what Form does, correct? Are you implying that when you do it, it's valid? Or do you do it knowing it means nothing and isn't statistically meaningful?

It is most certainly not meaningful in any statistical sense. I am implying it is valid on that one scope and one scope alone. That has no bearing whatsoever on any other sample of the same product line.

I am trying to get past the period where any infant mortality issues with my specific scope might pop up to minimize the chance of catastrophic failure in the field for that one specific scope. That's it. Nothing more.

Also keep in mind that I never fudge the business with the mounts since I have no interest in making any scope look good or look bad.

As for NDAs, one of the first sentences in almost any NDA is that I am not supposed to disclose who that NDA is with.

ILya

What if you performed your own trunk test with four different samples of the same scope and all four failed to retain zero? Would you be concerned or just chalk it up to bad luck?

What does "udge the business with the mounts" mean and what are you inferring?


So you can't talk about your NDAs, but you can make accusations about others? (It's well documented you've accused Form of being paid by optics companies.) Seems a bit disingenuous to me.

That was supposed to say "fudge". Typo.

Four is not enough for anything statistical, but it would be more significant than a sample of one. It would still not tell us anything about the probability of a long-term problem, but it would tell us more about the possibility of an early issue, or lack thereof, than looking at just one. Not a whole lot more, though, since it is not like I have a set off-road course I drive through to replicate the exact impacts.

There are always small manufacturing inconsistencies and machining marks, and sometimes simply using the scope works through them. Again, it does nothing statistical, but it is meaningful for that specific scope. I remember I once had a 5-25x56 Strike Eagle that did a weird tracking thing. My best guess is that there was a tiny machining mark on the turret contact pad, and that after twisting the turrets back and forth a few dozen times it simply wore in and the weirdness went away. Does that imply anything for the rest of the Strike Eagle scopes? Not a damn thing.
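
To put rough numbers on why four-for-four failures would still mean something: if the true per-scope failure rate were only a few percent, the odds of all four independent samples failing would be vanishingly small. A quick sketch, with made-up rates, just to show the arithmetic:

# Chance that all n independent samples fail if the true per-scope
# failure rate is p. The rates below are hypothetical.
def prob_all_fail(p, n=4):
    return p ** n

for p in (0.02, 0.05, 0.20, 0.50):
    print(f"true failure rate {p:.0%}: P(all 4 fail) = {prob_all_fail(p):.2e}")

So four failures in a row points at a real problem with those particular scopes, even though it says very little about the precise long-term rate.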

ILya

I meant to type "fudge". So that statement was basically saying you know how to mount scopes?

The issue with your Strike Eagle: what if it occurred with 6 different Strike Eagles? What would the statistical probability be of getting 6 that malfunctioned? How many would need to be tested to be statistically valid? 50, 100, 1k?

Do you know of any optics company that tests rifle scopes for zero retention from impacts? As in mounting them to a rifle: shoot, impact, shoot? If so, who; if not, why?

Yes, I know how to mount scopes, but the statement was meant to say that if someone wants to get a particular result, it is very easy to do so with selective scope mounting. Frankly, mounting a scope properly is a pretty straightforward thing, yet it is remarkable how many problems come from it.

No scope company tests every scope that way, but every respectable company I can think of does it many times during the product development process. Hell, I've done it for a couple of them in the past. It ain't rocket science if you pay attention to consistency. Expensive scopes get some amount of individual evaluation, i.e. every scope gets looked at. As they become less expensive, companies do spot checks. Some larger companies will pull a scope from every lot and beat it up a little to see if there are systemic lot-to-lot variations.

How much testing is statistically significant is hard to say because lot-to-lot variation plays a role. Assuming manufacturing is consistent and the expected failure rate is in the 1% to 4% range, depending on the scope price and build quality, looking at 40-50 scopes should be reasonably indicative. More is, of course, better. Some of that can be short-circuited with accelerated wear testing; some companies do it, but it is hard to say how many.
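
To show where the 40-50 figure comes from: assuming independent samples (which lot-to-lot variation undermines somewhat), the chance of catching at least one failure in a sample of n scopes is 1 - (1 - p)^n. A rough sketch with the failure rates mentioned above:

# Chance of seeing at least one failure in n independent samples with
# true failure rate p. Independence is the weak assumption here, since
# lot-to-lot variation makes real samples less informative than this.
def p_detect(p, n):
    return 1 - (1 - p) ** n

for p in (0.01, 0.02, 0.04):
    for n in (40, 50):
        print(f"failure rate {p:.0%}, n = {n}: P(>= 1 failure) = {p_detect(p, n):.2f}")

At 1% you catch a problem a bit less than half the time with 50 scopes; at 4% you catch it most of the time, which is why more is always better.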

Here is the rub: most companies do not make enough scopes to easily afford banging up 50 of them, so there is a lot of educated guessing and trusting of the OEM happening.

Once the scope is out, a lot is learned from simply analyzing service department records. That's usually where we find the most statistically significant data on how different designs hold up under average use.
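
As a sense of scale: with a few thousand units in the field, even a crude returns-per-units-shipped ratio pins the failure rate down far more tightly than any 40-50 scope test. A sketch with entirely made-up numbers, using a standard Wilson score interval:

import math

# 95% Wilson score interval for a binomial proportion; the return and
# shipment counts below are hypothetical, just to show the scale.
def wilson_interval(failures, n, z=1.96):
    p_hat = failures / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

# e.g. 60 zero-retention returns out of 3000 scopes shipped (hypothetical)
lo, hi = wilson_interval(60, 3000)
print(f"observed rate 2.0%, 95% interval roughly {lo:.1%} to {hi:.1%}")

That kind of interval is far tighter than anything a pre-production test of a few dozen scopes can give you.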


ILya