
C3 Howitzer Replacement

I am always surprised that an M109 turret was never adapted for use on a HEMTT - sure, it would need a riser block on the rear to make room for the crew - but it would seem an easier, more complete way to field a system with significant commonality.

These days, I believe, it is better to get the crew out of the "turret" completely. Like the Archer programme.
 
Perhaps - I have an inherent distrust in technology - so anything I also want the ability to verify via a glass etched reticle - that goes from an optical sight on a small arm - to a howitzer.
Given Hostile State EW programs, my belief is a gunner still needs to be able to lay and fire a gun manually if needed.
 

I'll go halfway. I believe there is still a need for manually operated systems. But at a certain level of scale it is cheaper to buy two completely different weapons - some fully automated and some manual - than to create a single system that can be both.

The one thing we can both agree on is the need for the man in the loop. That is why I still prefer the notion of four tanks, each with one crewman and an AI RWS turret, over a conventional tank with 4 crewmen and 3 uninhabited wingmen. Same number of tankers, same number of guns.
 
I think that there is a crossover point where you simply have to trust that the engineering gives it the level of security and operability to be as close to failsafe as you can make it.

If you take an Archer (or most Russian tanks for that matter), there simply is no physical room for the requisite backup humans, nor for the machinery to allow a backup manual override.

I have the same general problem as you though. My basic gunnery training worked very much on the basis of "double checking" data at all stages of an engagement, and on keeping a redundant "reserve" capability of equipment and people for when systems inevitably fail or are taken out of action and need replacement.

🍻
 
I know it's air defence, but I for one think that at least some of the C3s in the Reserves should be replaced by air defence systems. So in keeping with that, the latest from the new US M-SHORAD in Europe:



🍻
Replacing an easy to use, yet non-deployable gun system with an easy to use, yet useful AD system? 😤

Because modern systems have pretty advanced capabilities, and have been designed for a video-game generation (who can learn to operate the system in a day), your idea just makes sense.

It would also be a ‘safe’ plan, if the reserves continue to only provide individuals or sub-units in the future.


My $0.02 🍻
 
I'll be honest, my concerns are multi-fold.
1) Human backup due to enemy jamming, EMP, etc.

2) Distrust of electronics (those new batteries you put in your NVG battery pack just decide to crap out halfway through an op).

3) Absolute distrust of AI systems - the leaps and bounds we have "made" in AI and machine learning are staggering, where a system can generate COAs and reactions faster than a real-time team of humans.

4) Concern that at a certain point war will be waged via autonomous systems - the bloodless war, which diminishes the cost and desensitizes societies until it expands to a point that cannot be contained -- also the whole Skynet is Live... ;)

Humans are fallible - but at least we try.
 
All of which is why I like an on-board human with a hardwired kill switch - i.e. to kill R2D2 when he stops cooperating in the backseat.

Also the driver gets his own Nintendo game pad to control the gun.
 
Couldn’t agree more with every single one of these points.

1) Just look at the Russian capabilities demonstrated in Ukraine, which the US Army described as ‘eye watering’ - they denied the Ukrainians the ability to communicate, move in the open, or transmit anything, and even forced them to rethink the use of basic electronic gadgets.

Having a human that can control something if/when the enemy scrambles its brains or its ability to receive directions is a good idea.

Unless both sides are using primarily autonomous machines that clearly transmit IFF codes, humans need to decide when to pull the trigger and when not to.


2) Especially military electronics. Especially battery-powered anything in cold temperatures for extended periods.

(Or when it comes to individual kit, the muppet who had those NVGs before you did neglected their care, and now somehow the batteries die fast and the depth perception is off…)


3) If I had to choose 3 ‘likely or most likely things to destroy society as we know it’ - AI would very much be included.

We have arrived at a point of growth, as an overall species, where we are actively going down 2 different, yet incompatible paths at the same time - and most people don’t even realize it.

On the one hand we tend to be on a quest to do whatever we can, just because we think we figured out how. The question of “why?” is rarely answered adequately.

Yet on the other hand we are deliberately playing at becoming Gods, even though we all know we are far from being an apex of the universe.

We are deliberately creating, via algorithms and circuits, an intelligence that is self aware, capable of growing and learning faster than we can fathom, and can think/reason for itself in a way that is alien to us.

Limited AI to take certain tasks away from a human operator? Sure.

Full self-actualized AI? Seems like a really bad idea.


4) Agreed. Human tragedy has to be part of the human experience, and the decision to inflict violence on humans living elsewhere on the planet needs to come with a cost not measured in dollars.

Otherwise, it’s just a real life video game.
 
That is an outstanding observation -- I hadn't really looked at it like that.
 
I agree. There are a few movies that deal with AI - the Terminator series, and at least one episode of the original Star Trek.
 
Best takeaway from the article is of course in the comments section, where some point out that it is the Sweden-based Bofors division of BAE, and that the British side of the business would make a mess of it:

“agreed we could take a spade and want it to turn into a hang glider

just because”
 
Chinese options in 122mm
Maybe we could quietly order 30 from WISH?

 
I kind of like Archer - at least the concept.

My concerns revolve around the limited internal magazine capacity of 21 rounds. After that it needs to go for a reload, a fairly deliberate process done by a crew who are not under armour while doing it. The whole thing requires a new way of doing artillery support, with guns constantly moving into and out of action. They talk about 30 seconds to get into and out of action, but that does not include the travel time from the hide to the firing platform and subsequently to the resupply point.

I'm left with the question of how many of these guns one would need to provide any degree of guaranteed continuous fire support in a high-intensity conflict. An Msta, by contrast, carries some 50 rds on board. The Paladin carries 39 rounds, with another 90 in its M992 ammo limber, which can replenish under armour.
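To make that question concrete, here is a back-of-envelope sketch. Only the on-board round counts come from the discussion above; the rates of fire, reload durations, and travel times are illustrative assumptions, not sourced figures, so treat the outputs as a shape of the trade-off rather than real performance data:

```python
def sustained_rate(onboard_rds, rpm, pause_min):
    """Average rounds per minute over one full fire-pause cycle,
    where pause_min covers reload plus any transit to/from the
    resupply point."""
    firing_min = onboard_rds / rpm
    cycle_min = firing_min + pause_min
    return onboard_rds / cycle_min

# (name, on-board rounds, assumed rds/min, assumed reload + travel minutes)
guns = [
    ("Archer (21 rds)", 21, 8, 10 + 6),   # reload outside armour, plus transit
    ("Msta-S (50 rds)", 50, 7, 15 + 0),
    ("Paladin (39 rds)", 39, 4, 12 + 0),  # M992 limber brought alongside
]

for name, rds, rpm, pause in guns:
    print(f"{name}: ~{sustained_rate(rds, rpm, pause):.1f} rds/min sustained")
```

Under these made-up numbers the shallow magazine dominates: however fast Archer shoots in a burst, its average output over a full cycle falls well below the deeper-magazine guns, which is exactly the juggling-missions-between-guns problem described above.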

As for its Nora competitor, it holds a 36-round autoloader magazine (albeit only 12 are in the "ready magazine"). Its resupply vehicle is basically unarmoured. From what I understand, the "ready magazine" needs manual reloading, and the gun is in fact capable of fully manual loading from inside the armoured turret.

It's truly one of those things where you need to put your hands on the kit to see what its limitations and capabilities are. I'll be interested in seeing the US Army's evaluations when they come out.

🍻
 
I would think it would be relatively easy to adapt Archer for an under-armour reload, with a Stryker-variant ammo limber in that role.
 
From the video I've seen of the Archer, loading is done on a platform extending out of the ammo limber to the side of the gun, where a relatively light hatch opens to provide access for loading - it is not like the protected turret of the Paladin. And in fairness, many if not most US arty bns remove the transfer conveyor from the M992 in favour of backing the M992 close to the back of the Paladin and passing the rounds and powder through the unarmoured space between the two vehicles.

I imagine one could design an armoured load bed with a platform that has a decent level of ballistic protection around it. Betcha it's not a high priority on anyone's agenda. Nonetheless, the low number of on-board rounds severely limits the weight of fire that one gun can produce before it has to go out of action for a reload. That essentially means juggling the mission to another gun or guns. That's quite possible, and that's what artillery is all about - massing fires across the AO - but it is still a limitation that needs to be factored in.

🍻
 