#1
Derek Lyons wrote:
> (phil hunt) wrote:
>> [...]
>>> The issue isn't programmers Phil. The issue is the massive amounts of
>>> R&D to develop the information needed to specify the sensor that the
>>> programmers will process the output of.
>> The sensors needed are visual and IR imaging. It doesn't require a
>> massive R&D program to determine that, or to decide which combinations
>> of number of pixels and widths of field of view are appropriate.
> Ah, another problem handwaved away. You not only lack a clue, you are
> aggressive in avoiding obtaining one.

Hmm. I've done several iterations of this problem, though not with
systems that went to full scale development or production. I believe
that for suitably moderated operational requirements, the problem can
be much simpler than I believe Derek thinks it is. I believe that Phil
is grossly underestimating the real requirements, even for those
suitably moderated operational requirements.

There is a fair amount of open source material on various small guided
weapon R&D and proposals. Unless those were all wrong, it can be a lot
simpler than current 'standard' weapons programs. But few of those have
progressed to production. The new Marines/Navy Spike missile is one
exception, and to some degree is the exception that probably proves the
rule. Their R&D budget essentially was hidden in the slush funds at
China Lake for a couple of years, and the missile itself is estimated
to cost at most a few thousand dollars.

-george william herbert
#2
On 20 Dec 2003 14:25:46 -0800, George William Herbert wrote:
> Derek Lyons wrote:
>> (phil hunt) wrote:
>>> [...]
>>>> The issue isn't programmers Phil. The issue is the massive amounts of
>>>> R&D to develop the information needed to specify the sensor that the
>>>> programmers will process the output of.
>>> The sensors needed are visual and IR imaging. It doesn't require a
>>> massive R&D program to determine that, or to decide which combinations
>>> of number of pixels and widths of field of view are appropriate.
>> Ah, another problem handwaved away. You not only lack a clue, you are
>> aggressive in avoiding obtaining one.
> I've done several iterations of this problem, though not with systems
> that went to full scale development or production. I believe that for
> suitably moderated operational requirements, the problem can be much
> simpler than I believe Derek thinks it is. I believe that Phil is
> grossly underestimating the real requirements, even for those suitably
> moderated operational requirements.

Which requirements am I underestimating? (Bear in mind I'm considering
missiles for several different roles.)

> But few of those have progressed to production. The new Marines/Navy
> Spike missile is one exception,

This is the Israeli ATGM, isn't it?

> and to some degree is the exception that probably proves the rule.
> Their R&D budget essentially was hidden in the slush funds at China
> Lake for a couple of years, and the missile itself is estimated to
> cost at most a few thousand dollars.

And uses visual and IIR homing.

--
"It's easier to find people online who openly support the KKK than
people who openly support the RIAA" -- comment on Wikipedia
(Email: , but first subtract 275 and reverse the last two letters).
#3
phil hunt wrote:
> George William Herbert wrote:
>> I've done several iterations of this problem, though not with systems
>> that went to full scale development or production. I believe that for
>> suitably moderated operational requirements, the problem can be much
>> simpler than I believe Derek thinks it is. I believe that Phil is
>> grossly underestimating the real requirements, even for those suitably
>> moderated operational requirements.
> Which requirements am I underestimating? (Bear in mind I'm considering
> missiles for several different roles.)

Let me give you an example... assume that you need a certain pixel
width of an object to successfully identify it (say, 10 pixels across)
with a certain contrast ratio. You also have certain limitations on the
maneuverability of the airframe this is all mounted on: it can't pull
more than a certain number of G's, etc.

To successfully design the homing mechanism, you need to assess the
distance and light or background noise conditions at the frequencies
you're looking at (visual, IIR, whatever), and the magnification of the
imaging system and its optical resolution. You need a wide enough field
of view that you can see the targets as you fly along searching, but
not so wide that you won't be able to discriminate a target until it's
so close that maneuvering to hit it becomes a serious problem. You need
to assess the impact on the sensor and field of view of the background
coloration across the target areas, etc.

With a much simpler system, laser spot homing, I spent some months
working out that nested set of problems. Taking one shortcut made the
weapon not lock on if the ballistic miss trajectory was too far off.
Taking another meant that it typically locked on early, in a portion of
its flight that led to it flying out of control as it lost energy
trying to track the laser spot as it flew out. It would scrub too much
forward velocity off early and then start to come down too short of the
target and stall out trying to correct for that. Bigger lifting
surfaces would solve that, but cause other problems for weapon
packaging. The final solution was to modify the trajectory limitations,
with the more aggressive sensor system, which scrubbed a bit off the
maximum range (it could still reach the old range, but if your aim was
off too much in the initial firing it would just out and out miss
short).

You actually have to sit down, draw up a notional design, put a
notional sensor on it, figure out what the parameters are, and simulate
it for a while to see what the gotchas are. That requires models of the
sensor, guidance, optics or transmitter, target behaviour,
aerodynamics, and trajectory / movement dynamics of the weapon. Even
getting a rough first pass of that to tell you what the roughly right
answers are is nontrivial, can easily be months of work, and requires
experience across a very wide range of disciplines (or a keen ability
to figure out what you don't know and find it via research).

>> But few of those have progressed to production. The new Marines/Navy
>> Spike missile is one exception,
> This is the Israeli ATGM, isn't it?

No, there are two missiles named Spike, and I'm referring to the
US Navy / China Lake one.
http://www.nawcwpns.navy.mil/~pao/pg...es/SpikeND.htm

-george william herbert
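As a back-of-envelope illustration of the pixels-on-target versus
field-of-view tradeoff described above, the sketch below estimates the
range at which a target first spans the required number of pixels, and
how much lateral correction a G-limited airframe could still make from
that range. Every number in it (sensor width, target size, closing
speed, G limit) is an invented assumption for illustration, not data
from this thread.

import math

def identification_range(target_width_m, pixels_needed, sensor_pixels, fov_deg):
    # Small-angle approximation: one pixel subtends (FOV / sensor_pixels)
    # radians, so the target first spans `pixels_needed` pixels at this range.
    pixel_angle_rad = math.radians(fov_deg) / sensor_pixels
    return target_width_m / (pixels_needed * pixel_angle_rad)

def lateral_correction(range_m, speed_mps, g_limit):
    # Crude upper bound: hold the full lateral acceleration for the whole
    # time of flight from `range_m`, at a constant closing speed.
    time_of_flight = range_m / speed_mps
    return 0.5 * g_limit * 9.81 * time_of_flight ** 2

# Invented assumptions: 640-pixel-wide imager, 3 m wide target identified
# at 10 pixels across, 100 m/s closing speed, 5 g manoeuvre limit.
for fov_deg in (10.0, 20.0, 40.0):
    r_id = identification_range(3.0, 10, 640, fov_deg)
    corr = lateral_correction(r_id, 100.0, 5.0)
    print(f"FOV {fov_deg:4.0f} deg: identify at ~{r_id:5.0f} m, "
          f"~{corr:5.0f} m of lateral correction available")

Widening the field of view makes searching easier but pulls the
identification range in until there is little room left to manoeuvre,
which is exactly the tradeoff being argued over.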
#4
On 21 Dec 2003 17:57:58 -0800, George William Herbert wrote:
> phil hunt wrote:
>> Which requirements am I underestimating? (Bear in mind I'm considering
>> missiles for several different roles.)
> Let me give you an example... assume that you need a certain pixel
> width of an object to successfully identify it (say, 10 pixels across)
> with a certain contrast ratio. You also have certain limitations on
> the maneuverability of the airframe this is all mounted on: it can't
> pull more than a certain number of G's, etc.

I can imagine a small, light wooden airframe, designed for low
detectability, pulling much lower Gs than a faster airframe, which
might be made of metal.

> To successfully design the homing mechanism, you need to assess the
> distance and light or background noise conditions at the frequencies
> you're looking at (visual, IIR, whatever), and the magnification of
> the imaging system and its optical resolution. You need a wide enough
> field of view that you can see the targets as you fly along searching,
> but not so wide that you won't be able to discriminate a target until
> it's so close that maneuvering to hit it becomes a serious problem.

I think that examining how nature has solved similar problems is
useful. The human eye has lots of closely packed pixels at the centre,
while in the periphery the pixels are much more widely spread. Perhaps
the system could use one (or more) wide-angle lenses, and a (possibly
movable) telephoto lens for giving more detailed attention to an
object.

> You need to assess the impact on the sensor and field of view of the
> background coloration across the target areas, etc.

Human eyes have 3 colours. There's no reason in principle why an
artificial eye would have that number. (Though if we are using cheap
hardware, it probably would.)

If a vehicle is stationary and camouflaged, it's going to be a *lot*
harder to spot than a moving one. I think going for the ability to spot
moving vehicles well, and stationary vehicles a lot less well, is
adequate performance.

> With a much simpler system, laser spot homing,

But who shines the laser on the right spot? Or are you assuming there's
a human with a laser designator in the loop?

> I spent some months working out that nested set of problems. Taking
> one shortcut made the weapon not lock on if the ballistic miss
> trajectory was too far off. Taking another meant that it typically
> locked on early, in a portion of its flight that led to it flying out
> of control as it lost energy trying to track the laser spot as it flew
> out.

I'm not with you there... could you explain?

> It would scrub too much forward velocity off early and then start to
> come down too short of the target and stall out trying to correct for
> that.

Because it was manoeuvring too much at the start?

> You actually have to sit down, draw up a notional design, put a
> notional sensor on it, figure out what the parameters are, and
> simulate it for a while to see what the gotchas are.

That makes sense -- I'm sure lots of things wouldn't work right first
time.

> That requires models of the sensor, guidance, optics or transmitter,
> target behaviour, aerodynamics, and trajectory / movement dynamics of
> the weapon. Even getting a rough first pass of that to tell you what
> the roughly right answers are is nontrivial, can easily be months of
> work, and requires experience across a very wide range of disciplines
> (or a keen ability to figure out what you don't know and find it via
> research).

How much are simulated environments used in designing missile homing
systems? By a simulated environment, I mean one where the missile
software works just as it would on the real missile, but its output,
instead of going to control surfaces, goes to a flight simulation
program, and its input, instead of coming from a visual sensor (or
whatever), comes from a program that simulates what the output of that
sensor would be under those conditions.
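A toy version of that loop might look like the sketch below. Nothing in
it comes from any real missile program: the "flight simulation program"
is just a 2D point-mass model, the simulated sensor only reports a
bearing within an assumed field of view, and the guidance is simple
pursuit steering with made-up gains.

import math

# Toy software-in-the-loop harness: guidance() stands in for the missile
# software, simulated_seeker() for the sensor model, and fly() for the
# flight simulation program.  All parameters are invented for the example.

def simulated_seeker(missile_xy, heading_rad, target_xy, fov_rad=math.radians(30)):
    # Report the bearing to the target relative to the nose, or None if
    # the target falls outside the assumed field of view.
    dx = target_xy[0] - missile_xy[0]
    dy = target_xy[1] - missile_xy[1]
    bearing = math.atan2(dy, dx) - heading_rad
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]
    return bearing if abs(bearing) <= fov_rad / 2 else None

def guidance(seeker_bearing, gain=3.0, max_turn_rate=math.radians(20)):
    # The "missile software": steer toward whatever the seeker reports,
    # clipped to a maximum turn rate; fly straight if nothing is seen.
    if seeker_bearing is None:
        return 0.0
    return max(-max_turn_rate, min(max_turn_rate, gain * seeker_bearing))

def fly(target_xy, speed=100.0, dt=0.02, max_time=60.0):
    # 2D point-mass "flight simulation program".
    x, y, heading, t = 0.0, 0.0, 0.0, 0.0
    while t < max_time:
        heading += guidance(simulated_seeker((x, y), heading, target_xy)) * dt
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        t += dt
        if math.hypot(target_xy[0] - x, target_xy[1] - y) < 10.0:
            return True, t
    return False, t

hit, t = fly(target_xy=(3000.0, 400.0))
print("hit" if hit else "miss", f"after {t:.1f} s")

The point of even a toy setup like this is that guidance() never
touches the physics directly, so the same control code can later be
pointed at better sensor and airframe models, or at real hardware.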
>>> But few of those have progressed to production. The new Marines/Navy
>>> Spike missile is one exception,
>> This is the Israeli ATGM, isn't it?
> No, there are two missiles named Spike,

And two named Javelin, incidentally.

> and I'm referring to the US Navy / China Lake one.
> http://www.nawcwpns.navy.mil/~pao/pg...es/SpikeND.htm

I can't load that URL.

--
"It's easier to find people online who openly support the KKK than
people who openly support the RIAA" -- comment on Wikipedia
(Email: , but first subtract 275 and reverse the last two letters).