How do I create a function to process each millisecond? - page 18

 
Taras Slobodyanik:

Er, so it just appears by itself? Out of nowhere?

The question was, where do the 50 half-frames come from?
How does 25 become 50? (We have 25 as the standard, don't we?)
And why, then, can't a whole frame easily be made out of that?

Why is it impossible to make up a whole frame? Why is it not easy? It's probably easy and simple; the whole system is designed that way. There is an interlaced signal stream, and there is the same interlaced drawing on the screen. No problem.

50 half-frames = 25 frames divided in half.

I wrote above: there are frames where a whole frame from the video is displayed, and in between, half from one frame and half from the other. A sort of seamless transition from one frame to the next. A sort of interlaced transition.

 
Dmitry Fedoseev:

Why is it impossible to make up a whole frame? Why is it not easy? It's probably easy and simple; the whole system is designed that way. There is an interlaced signal stream, and there is the same interlaced drawing on the screen. No problem.

50 half-frames = 25 frames divided in half.

I wrote above: there are frames where a whole frame from the video is displayed, and in between, half from one frame and half from the other. A sort of seamless transition from one frame to the next. A sort of interlaced transition.

Impossible, because each half-frame is different (now I feel like I'm explaining to a kindergartner)).
Each half-frame is from a different moment in time: a moving object is in a different place in each of the 50 half-frames. That's why a whole bunch of algorithms were invented to reconstruct full frames and reduce the stream to 25 frames, and in doing so half of the visual information is thrown away - unless you convert to the same 50 fps instead.

So all television (past and present) that uses interlacing is shot at 50 frames (or rather half-frames) per second.
And it all has the smoothness of 50 frames, not 25.

ps. The same goes for the sports channels; they are all interlaced because that gives the smoothest motion: 50 half-frames per second in the same stream that carries 25 full frames.
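(As a side note, here is a minimal sketch of my own, not anyone's code from this thread, of the naive "weave" way of gluing two half-frames into one full frame. The frame size and the single-channel pixel format are assumptions made only for illustration; the point is that when the two fields come from different moments in time, a moving edge ends up displaced between neighbouring rows - the classic interlacing "comb" - which is exactly why the more elaborate deinterlacing algorithms mentioned above exist.)

#define FRAME_W 720                 // assumed frame width
#define FRAME_H 576                 // assumed visible height of a 625-line frame

// evenField and oddField each hold FRAME_H/2 rows of FRAME_W luminance samples
void WeaveDeinterlace(const uchar &evenField[], const uchar &oddField[], uchar &fullFrame[])
  {
   ArrayResize(fullFrame, FRAME_W*FRAME_H);
   for(int y=0; y<FRAME_H; y++)
     {
      const int srcRow = y/2;                       // row inside the half-frame
      for(int x=0; x<FRAME_W; x++)
         fullFrame[y*FRAME_W+x] = (y%2==0)
            ? evenField[srcRow*FRAME_W+x]           // even screen rows from the even field
            : oddField [srcRow*FRAME_W+x];          // odd screen rows from the odd field
     }
   // If the two fields were captured ~20 ms apart, any moving object is now
   // "combed": its even and odd rows show two different positions.
  }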

 

No, it's not like that. Nothing is discarded. There were 25 frames and 25 remain; the same number of transitional frames is simply added to them, but already in the monitor (created in the monitor, so to speak). Each video half-frame stays on the screen for two monitor frames.

Let's imagine that the frames lie in the file in the right order: 1, 2, 3, 4... It doesn't matter at all how they are actually stored; what matters is how we read them.

Read the even lines of frame 1 and send them; read the odd lines of frame 2 and send them; read the even lines of frame 2 and send; read the odd lines of frame 3 and send; read the even lines of frame 3 and send; and so on.

Now in the monitor.

We received the even-numbered lines of frame 1 and drew them. New monitor frame: the even lines of frame 1 are still lit, while the odd lines of frame 2 are being received and displayed. As a result, the monitor shows a frame with half of its lines from frame 1 and half from frame 2. Next monitor frame: the odd lines (from frame 2) are still lit and the even lines fade; we receive the even lines of frame 2 and display them, so the monitor shows a frame made of the two halves of frame 2 - a normal, whole frame. The next monitor frame keeps showing the even lines of frame 2 and adds the odd lines of frame 3: a transitional frame made of half of frame 2 and half of frame 3. Next comes the whole frame 3. Next, half of frame 3 and half of frame 4. Then the whole 4. Half of 4 and half of 5. Whole 5. Half of 5 and half of 6. Whole 6. Half of 6 and half of 7. Whole 7...
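(To make this sequence easier to follow, here is a tiny sketch of my own - assuming exactly the read-out order described above, a 25 fps source and a 50 Hz display - that prints which source frame supplies the even lines and which supplies the odd lines on every monitor refresh.)

void PrintFieldSchedule(const int refreshes=12)
  {
   int evenFrom=1;   // the first refresh draws the even lines of frame 1
   int oddFrom =0;   // no odd lines have been drawn yet
   for(int t=1; t<=refreshes; t++)
     {
      if(t>=2)
        {
         if(t%2==0) oddFrom =t/2+1;      // refresh 2,4,6,... brings the odd lines of frame 2,3,4,...
         else       evenFrom=(t+1)/2;    // refresh 3,5,7,... brings the even lines of frame 2,3,4,...
        }
      string odd  = (oddFrom==0)        ? "none"        : IntegerToString(oddFrom);
      string kind = (oddFrom==evenFrom) ? "whole frame" : "transitional frame";
      PrintFormat("refresh %2d: even lines from frame %d, odd lines from frame %s (%s)",
                  t, evenFrom, odd, kind);
     }
  }

With the default argument this prints the same pattern as in the text above: 1/none, 1/2, 2/2, 2/3, 3/3, 3/4, 4/4 and so on, i.e. a whole frame on every second refresh and a transitional frame in between.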

 
Taras Slobodyanik:

Er, so it just appears by itself? Out of nowhere?

The question was, where do the 50 half-frames come from?
How does 25 become 50? (We have 25 as the standard, don't we?)
And why, then, can't a whole frame easily be made out of that?

From the same place the Soviet SECAM was copied from - the French SECAM. It is a plain TV broadcasting standard which defines in its structure a PROTOCOL, an order of transmission of the analogue signal - the key word here is PROTOCOL.

The even lines are transmitted first, then the odd lines containing the video information, then a delayed audio signal, and all of this has to fit into 1/50th of a second; then a new cycle begins...
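(Just to put numbers on "has to fit into 1/50th of a second": a rough calculation of my own, under the usual assumption of a 625-line, 50 Hz interlaced system.)

void PrintFieldTiming()
  {
   const double fieldRate  =50.0;                    // half-frames (fields) per second
   const double totalLines =625.0;                   // lines in a full frame
   const double fieldTime  =1.0/fieldRate;           // 20 ms for one field
   const double linesField =totalLines/2.0;          // 312.5 lines per field
   const double lineTime   =fieldTime/linesField;    // about 64 microseconds per line
   PrintFormat("field: %.0f ms, lines per field: %.1f, one line: %.0f us",
               fieldTime*1000.0, linesField, lineTime*1e6);
  }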

This standard matched the first video cameras, which also read the video information off the photoelectric sensor line by line... that is the beginning of the story... that 50 Hz and the two passes of the kinescope beam started the era of television...

Then the chips changed, but the standard remained the same.

I spent half of yesterday in this thread talking about de-interlacing. This effect appears when the frequency of the video source (the CAMERA!!!) does not match the frequency of the video receiver (the TV!!!), and there are different ways to match these two devices. If the matching is done badly, you will sometimes see lagging/jerking... it is all compensation for missed frames.

There are different ways of matching and conversion: digital processing and electronic circuits... Never mind; the important thing is to understand that existing equipment runs at different data-processing frequencies. The photocells in professional camcorders read information off the sensor at several MHz and output it into a video stream, which is then fed to broadcasting equipment (analogue SECAM, digital DVB-T2) operating at 50 Hz. If the engineers and video editors are ham-fisted, you will see the resulting distortion; if it is done professionally, you will never guess that the original video material was manipulated in any way.


All this talk about monitor refresh rates and 120 Hz and so on... The point is not the refresh rate; a lot of other factors play a major role, like pixel response time, black balance, colour reproduction, grain... You have to understand that video on a TV is a manipulation!!! With static pictures, even if you strain as hard as you can you may not see the distortions, but if you deliberately look for them, you will see them... That's how the mind works.

 
Peter Konow:

I want to remind everyone: the original question was "Does it make sense to increase the OnTimer() frequency above 40 milliseconds, if the human image change rate is 24 frames per second?"

this thread is one big holy war.

I didn't want to yesterday, but I googled some information; here is what the Wiki had:

  • 16 is the standard shooting and projection rate of silent cinema;
  • 18 is the standard shooting and projection frequency of the amateur "Super 8" format;
  • 23.976 (24×1000÷1001) is the telecine projection frequency in the American 525/60 scan standard, used for lossless interpolation;
  • 24 is the worldwide standard frequency for film shooting and projection;

i.e. even 16-18 frames per second already give an animation effect - the same as GIFs - and if you are not picky about quality and the video is not very dynamic, it is quite watchable. Here is a short article on Habr; the trick is not the frequency but the choice of frames:

https://habr.com/post/251709/

 
Igor Makanu:

this thread is one big holy war.

I didn't want to yesterday, but I googled some information; here is what the Wiki had:

  • 16 is the standard shooting and projection rate of silent cinema;
  • 18 is the standard shooting and projection frequency of the amateur "Super 8" format;
  • 23.976 (24×1000÷1001) is the telecine projection frequency in the American 525/60 scan standard, used for lossless interpolation;
  • 24 is the worldwide standard frequency for film shooting and projection;

i.e. even 16-18 frames per second already give an animation effect - the same as GIFs - and if you are not picky about quality and the video is not very dynamic, it is quite watchable. Here is a short article on Habr; the trick is not the frequency but the choice of frames:

https://habr.com/post/251709/

There's definitely a lot of confusion going on here.

Apparently, about 24 frames per second is the maximum number of frames the brain can perceive and process. The eye on its own, without the brain (as was mentioned here), can "photograph" more frames per second. But what difference does that make if the brain cannot process them all?

So I suggested that those arguing write a simple script and check at what frame rate the change in the smoothness and naturalness of the image stops being noticeable.


It also seems to me that there is some confusion between picture quality - naturalness, colour saturation, lack of flicker, graininess, etc. - and frame rate, which only affects the perception of movement within the image.

 
Peter Konow:

It also seems to me that there is some confusion between picture quality - naturalness, colour saturation, lack of flicker, graininess, etc. - and frame rate, which only affects the perception of movement within the image.

Yes, that's exactly the point. Any test will be a subjective assessment, especially since everyone has different monitors, and the script will probably also be tied to the performance of the PC and the event model of MT and Windows.

Codecs in Windows should have the highest priority and work almost directly with the video card drivers, while in MT there may be some slowdowns; but imho all of this is subjective.

 
Igor Makanu:

Yes, that's exactly the point. Any test will be a subjective assessment, especially since everyone has different monitors, and the script will probably also be tied to the performance of the PC and the event model of MT and Windows.

Codecs in Windows should have the highest priority and work almost directly with the video card drivers, while in MT there may be some slowdowns; but imho all of this is subjective.

The script is very simple, and PC performance does not change anything. There will be little room for subjectivity here, precisely because of the simplicity of the test: just a figure on a black background moving from one point to another (1000 pixels) in one second. The idea is that, starting from one frame per second and increasing the frame rate, we gradually make the object's movement smoother, but at a certain point we stop noticing any change in smoothness. The figure at which we stop is the limit of perception. It may be more than 24 frames, but I'm not sure by how much.
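(Since we are on the MQL5 forum, here is roughly what such a test could look like as an Expert Advisor. This is only my own sketch of the idea, not Peter's actual script: the object name, the 20x20 white square and the input values are arbitrary choices, and a millisecond timer plays one "frame" per tick.)

input int FramesPerSecond=25;              // try 5, 15, 25, 50, 100...

const string ObjName    ="motion_test_square";
const int    PathPixels =1000;             // total travel per second
double       elapsedMs  =0.0;

int OnInit()
  {
   // a white 20x20 square positioned in screen pixels on the chart
   ObjectCreate(0, ObjName, OBJ_RECTANGLE_LABEL, 0, 0, 0);
   ObjectSetInteger(0, ObjName, OBJPROP_XSIZE, 20);
   ObjectSetInteger(0, ObjName, OBJPROP_YSIZE, 20);
   ObjectSetInteger(0, ObjName, OBJPROP_BGCOLOR, clrWhite);
   ObjectSetInteger(0, ObjName, OBJPROP_YDISTANCE, 100);
   EventSetMillisecondTimer(1000/FramesPerSecond);   // one timer tick per "frame"
   return(INIT_SUCCEEDED);
  }

void OnTimer()
  {
   elapsedMs+=1000.0/FramesPerSecond;
   // 1000 pixels per second, wrapping back to the start every second
   int x=(int)(MathMod(elapsedMs,1000.0)/1000.0*PathPixels);
   ObjectSetInteger(0, ObjName, OBJPROP_XDISTANCE, x);
   ChartRedraw();
  }

void OnDeinit(const int reason)
  {
   EventKillTimer();
   ObjectDelete(0, ObjName);
  }

One caveat that ties back to the original question: in my experience EventSetMillisecondTimer() does not fire much more often than roughly every 10-16 ms, so very high frame rates cannot really be reached this way, and the chart redraw itself adds its own jitter.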

 
Peter Konow:

You're a strange conversationalist. As the saying goes, everyone talks about whatever is on their own mind.

That's not what we're talking about here...

I gave you an analogy because everyone here is being clever, but no one wants to learn and read the scientific arguments.

Moreover, no one gives them.

 
jdjahfkahjf:

I gave you an analogy because everyone here is being clever, but no one wants to learn and read the scientific arguments.

Moreover, no one gives them.

You know, you give the impression of a very intelligent person. I liked your arguments.

However, I wouldn't agree that "everyone here is being clever". Some participants in the discussion seem very well versed in the subject.
