Man, are all of those taken on your equipment?
I dropped the hobby in the seventies when I became discouraged over what the local club was able to produce.
I didn't think it was possible to capture images like those.
No one in the club back then could come anywhere close.
Yep. Of course over the course of those photos I was continuing to throw wads of cash at it, pursuing better performance. Like everything I do, I ended up taking it too far, turning something that should be fun into an obsessive quest. Being miserable is the only way I can truly be happy.

The whole thing started changing in the early 2000s with the advent of quality consumer-grade webcams and digital cameras. That opened up whole new techniques when combined with innovative new software.
Some of those early moon shots were made with modified consumer-grade webcams. It seems like so long ago, but I think the first one I used came pre-modified: the lens removed and an eyepiece tube body attached in its place. So you'd put that in the focuser, hook it to a computer, and start capturing frames. (The focusing tricks are a whole separate tome. lol)
I’m sure you’ve seen the blurring, distorting effects of atmospheric seeing when looking at, say, lunar craters. So the technique was to capture a bunch of raw frames. On each frame, depending on the seeing and the size of the distortion cells, some, all, or none of the frame might be blurred or distorted, and all the frames have high signal noise because they are short exposures of relatively dim objects. The magic occurred once you started processing with the software.
The software could sort through the frames and grade them for sharpness using various contrast algorithms.
It would also use algorithms to recenter each frame on a high-contrast feature, to correct for drift across the frames.
It would then show you a graph of the frames from high quality to low quality. You could then choose the cutoff point for what you want to throw away.
Then the magic: it would begin to mathematically stack the frames above your quality cutoff point (which have been centered on a common identified high-contrast feature). That does a couple of things.
Since the distortions are random, over a large number of frames, statistically, for any portion of the image, there are more frames where that portion is not distorted than where it is. So you are building up signal that begins to dominate over distortion.
The other magical thing is that the noise pixels are also random. So the more images you stack, the more you average out those noise pixels. Noise decreases in proportion to the inverse square root of the number of stacked frames (1/√N).
When that is done you end up with a stacked image of the highest-quality frames captured, one that has strengthened the image signal and reduced the noise. With low noise, you can then apply some other fancy sharpening techniques without just sharpening noise.
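The grade/cutoff/stack loop can be sketched in a few lines. This is only a toy version (hypothetical function names, numpy only): it grades each frame by the variance of its Laplacian as a simple contrast metric, keeps the frames above a quality cutoff, and averages them. Real tools like RegiStax also do the recentering step and use far more sophisticated grading.

```python
import numpy as np

def grade_frames(frames):
    """Grade each frame for sharpness: variance of the discrete
    Laplacian is a simple contrast metric (blur washes it out)."""
    scores = []
    for f in frames:
        lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0)
               + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)
        scores.append(lap.var())
    return np.array(scores)

def stack_best(frames, keep_fraction=0.5):
    """Keep the sharpest fraction of frames and average them;
    random noise drops roughly as 1/sqrt(number stacked)."""
    frames = np.asarray(frames, dtype=float)
    scores = grade_frames(frames)
    cutoff = np.quantile(scores, 1.0 - keep_fraction)
    return frames[scores >= cutoff].mean(axis=0)
```

Averaging the kept frames is what builds up the signal while the random noise cancels out.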
Similarly, around that time, high-quality digital SLRs were coming on the market. That let you take many of the same techniques developed for planetary webcam astrophotography and apply them to the larger-format SLRs for deep-sky photography, by taking multiple frames of a minute or so each rather than 1/25 of a second like a webcam. That allowed you to stack out the electronic noise that comes with low-light imaging.
It also let you do stuff like throw out the one image in a stack of 60 where a plane or satellite crossed the frame. A godsend. Nothing was as annoying as having a 4-hour film exposure ruined at the end by a passing plane. LoL.
It also had the side benefit of greatly simplifying the tracking problem during a long exposure. Every scope drive has some error in it, usually a periodic error from slightly imperfect machining of the gears. In the old days you'd watch a star at high magnification through a crosshair in an attached guide scope and hold a hand controller, making constant micro-adjustments to correct for that error. For a whole 4-hour exposure!
But since you could program the camera to take, say, 100 exposures of 1-minute duration, 1 second apart, any drift within a 1-minute exposure could be ignored if it was small enough not to be noticeable in a single exposure. The software would recenter everything later when you stack.
You ended up with a very low noise image with strong signal that could hold up to other more aggressive enhancement filters.
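That recenter-on-stack step boils down to measuring each sub-frame's drift and shifting it back before averaging. Here's a minimal sketch of the idea (assumed names; integer-pixel shifts found via phase correlation, whereas real stacking software aligns to sub-pixel accuracy):

```python
import numpy as np

def recenter_and_stack(frames):
    """Align each frame to the first using the phase-correlation
    peak (the per-frame drift), undo the drift, then average."""
    ref_fft = np.fft.fft2(frames[0])
    aligned = [np.asarray(frames[0], dtype=float)]
    for f in frames[1:]:
        cross = np.fft.ifft2(ref_fft * np.conj(np.fft.fft2(f)))
        dy, dx = np.unravel_index(np.argmax(np.abs(cross)), cross.shape)
        h, w = cross.shape
        dy = dy - h if dy > h // 2 else dy   # wrap to a signed shift
        dx = dx - w if dx > w // 2 else dx
        aligned.append(np.roll(f, (dy, dx), axis=(0, 1)))
    return np.mean(aligned, axis=0)
```

The correlation peak lands at the offset where the frames' bright features overlap best, which is exactly the drift you need to undo.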
However, there was one final glitch. One of the signals you strengthened in the stack was noise coming off the power supply of the camera (essentially heat). That usually appeared as a purplish halo emerging from the side of the frame. In a single frame you'd never notice it; it would be too low a level. But you increased that signal by stacking, just as you did the target image.
More magic time. That same night (is best), at the same ambient temp, you'd take the camera off the scope, put on the cap, and take a series of dark frames. All those would be capturing was that heat signal off the power supply (amplified by ambient temp). You could then stack those separately to get a low-noise representation of that halo, and then your astro software could mathematically subtract the stacked “dark” frame image from the stacked “light” image, thereby leaving only the image of the intended target with the halo removed. Lol
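The dark-frame subtraction is simple arithmetic on the stacks. A rough sketch along the same lines (numpy, made-up names): median-combine the capped-lens darks into a master dark, then subtract it from the stacked lights.

```python
import numpy as np

def dark_subtract(light_frames, dark_frames):
    """Median-combine the capped-lens darks into a low-noise master
    dark (the heat halo), then subtract it from the stacked lights,
    leaving just the target."""
    master_dark = np.median(np.asarray(dark_frames, dtype=float), axis=0)
    stacked_light = np.mean(np.asarray(light_frames, dtype=float), axis=0)
    return np.clip(stacked_light - master_dark, 0.0, None)
```

Because the halo is the same fixed pattern in both stacks, the subtraction cancels it while the target (present only in the lights) survives.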
So, long story short,

digital media (webcams and SLR) and computer software revolutionized amateur astrophotography. But I’ve been out of it for 5 years or so. I have no idea how it’s progressed since then.
Further reading:
https://skyandtelescope.org/astronomy-resources/how-to-process-planetary-images/

This is software I used a lot, mostly for deep sky stuff:
Images Plus

Webcam/CCD stacking:
https://www.astronomie.be/registax/