Shadowing Practice: Learn English Speaking with "Hilbert's Curve: Is infinite math useful?" - YouTube

Difficulty: Hard
119 sentences
1
Let's talk about space-filling curves.
0:04.14 0:06.00 (1.9s)
2
They are incredibly fun to animate, and they also give a chance to address a certain philosophical question.
0:06.42 0:11.22 (4.8s)
3
Math often deals with infinite quantities, sometimes so intimately that the very substance of a result only actually makes sense in an infinite world.
0:11.82 0:20.18 (8.4s)
4
So the question is, how can these results ever be useful in a finite context?
0:20.94 0:25.68 (4.7s)
5
As with all philosophizing, this is best left to discuss until after we look at the concrete case and the real math.
0:26.66 0:32.64 (6.0s)
6
So I'll begin by laying down an application of something called a Hilbert curve, followed by a description of some of its origins in infinite math.
0:33.24 0:40.98 (7.7s)
7
Let's say you wanted to write some software that would enable people to see with their ears.
0:44.52 0:49.20 (4.7s)
8
It would take in data from a camera, and then somehow translate that into a sound in a meaningful way.
0:49.90 0:56.06 (6.2s)
9
The thought here is that brains are plastic enough to build an intuition from sight even when the raw data is scrambled into a different format.
0:56.90 1:04.08 (7.2s)
10
I've left a few links in the description to studies to this effect.
1:04.80 1:07.68 (2.9s)
11
To make initial experiments easier, you might start by treating incoming images with a low resolution, maybe 256 by 256 pixels.
1:08.30 1:16.48 (8.2s)
12
And to make my own animation efforts easier, let's represent one of these images with a square grid, each cell corresponding with a pixel.
1:17.34 1:24.24 (6.9s)
13
One approach to this sound-to-sight software would be to find a nice way to associate each one of those pixels with a unique frequency value.
1:25.08 1:34.14 (9.1s)
14
Then when that pixel is brighter, the frequency associated with it would be played louder, and if the pixel were darker, the frequency would be quiet.
1:35.02 1:42.40 (7.4s)
15
Listening to all of the pixels all at once would then sound like a bunch of frequencies overlaid on top of one another, with dominant frequencies corresponding to the brighter regions of the image, sounding like some cacophonous mess until your brain learns to make sense of the information it contains.
1:43.40 2:00.74 (17.3s)
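For readers who want to tinker, the pixel-to-frequency scheme described here can be sketched in a few lines of Python. Everything concrete below (the 200 to 2000 Hz range, the tiny 4x4 image, the sample rate, the `sonify` helper itself) is a made-up toy choice for illustration, not something from the video or the linked studies:

```python
import math

def sonify(image, sample_rate=8000, duration=0.01, f_lo=200.0, f_hi=2000.0):
    """Mix one sine wave per pixel; a pixel's brightness (0..1) sets how
    loudly its frequency is played."""
    pixels = [b for row in image for b in row]
    n = len(pixels)
    # Give every pixel its own frequency, evenly spaced across the range.
    freqs = [f_lo + (f_hi - f_lo) * k / (n - 1) for k in range(n)]
    samples = []
    for s in range(int(sample_rate * duration)):
        time = s / sample_rate
        samples.append(sum(b * math.sin(2 * math.pi * f * time)
                           for b, f in zip(pixels, freqs)) / n)
    return samples

# A 4x4 "image" with a bright blob in the middle: its four active
# frequencies played at once are a miniature version of the cacophony
# described above.
image = [[0.0, 0.0, 0.0, 0.0],
         [0.0, 1.0, 1.0, 0.0],
         [0.0, 1.0, 1.0, 0.0],
         [0.0, 0.0, 0.0, 0.0]]
audio = sonify(image)
```

Brighter pixels contribute louder sinusoids; a real implementation would stream these samples to a sound device continuously as frames come in from the camera.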
16
Let's temporarily set aside worries about whether or not this would actually work, and instead think about what function, from pixel space down to frequency space, gives this software the best chance of working.
2:01.90 2:13.48 (11.6s)
17
The tricky part is that pixel space is two-dimensional, but frequency space is one-dimensional.
2:14.50 2:20.28 (5.8s)
18
You could, of course, try doing this with a random mapping.
2:21.66 2:25.10 (3.4s)
19
After all, we're hoping that people's brains make sense out of pretty wonky data anyway.
2:25.70 2:29.60 (3.9s)
20
However, it might be nice to leverage some of the intuitions that a given human brain already has about sound.
2:30.40 2:36.30 (5.9s)
21
For example, if we think in terms of the reverse mapping from frequency space to pixel space, frequencies that are close together should stay close together in the pixel space.
2:36.96 2:47.26 (10.3s)
22
That way, even if an ear has a hard time distinguishing between two nearby frequencies, they will at least refer to the same basic point in space.
2:47.70 2:56.32 (8.6s)
23
To ensure this happens, you could first describe a way to weave a line through each one of these pixels.
2:57.40 3:03.22 (5.8s)
24
Then if you fix each pixel to a spot on that line and unravel the whole thread to make it straight, you could interpret this line as a frequency space, and you have an association from pixels to frequencies.
3:04.22 3:17.94 (13.7s)
25
One weaving method would be to just go one row at a time, alternating between left and right as it moves up that pixel space.
3:19.84 3:26.98 (7.1s)
26
This is like a well-played game of Snake, so let's call this a Snake Curve.
3:27.78 3:31.40 (3.6s)
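The snake-curve ordering is simple enough to write down directly. A minimal sketch (the helper name is mine, not the video's):

```python
def snake_cell(n, d):
    """Cell (col, row) visited at step d (0 .. n*n - 1) of a snake curve on
    an n x n grid: bottom row to top, alternating direction each row."""
    row, offset = divmod(d, n)
    col = offset if row % 2 == 0 else n - 1 - offset
    return col, row

# The order in which a 4x4 grid of pixels gets threaded:
path = [snake_cell(4, d) for d in range(16)]
```

Unraveling `path` into a straight line is exactly the pixel-to-frequency association described above.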
27
When you tell your mathematician friend about this idea, she says, why not use a Hilbert curve?
3:32.60 3:37.46 (4.9s)
28
When you ask her what that is, she stumbles for a moment.
3:38.22 3:40.60 (2.4s)
29
So it's not a curve, but an infinite family of curves.
3:41.22 3:44.38 (3.2s)
30
She starts, "Well no, it's just one thing, but I need to tell you about a certain infinite family first."
3:44.38 3:50.54 (6.2s)
31
She pulls out a piece of paper and starts explaining what she decides to call pseudo-Hilbert curves, for lack of a better term.
3:51.12 3:57.74 (6.6s)
32
For an order-one pseudo-Hilbert curve, you divide a square into a 2x2 grid, and connect the center of the lower left quadrant to the center of the upper left, over to the upper right, and then down in the lower right.
3:58.32 4:12.06 (13.7s)
33
For an order-two pseudo-Hilbert curve, rather than just going straight from one quadrant to another, we let our curve do a little work to fill out each quadrant while it does so.
4:12.62 4:22.54 (9.9s)
34
Specifically, subdivide the square further into a 4x4 grid, and we have our curve trace out a miniature order-one pseudo-Hilbert curve inside each quadrant before it moves on to the next.
4:23.06 4:34.64 (11.6s)
35
If we left those mini-curves oriented as they are, going from the end of the mini-curve in the lower left to the start of the mini-curve in the upper left requires an awkward jump, same deal with going from the upper right down to the lower right, so we flip the curves in the lower left and lower right to make that connection shorter.
4:35.52 4:53.58 (18.1s)
36
Going from an order-two to an order-three pseudo-Hilbert curve is similar.
4:54.78 4:58.78 (4.0s)
37
You divide the square into an 8x8 grid, then put an order-two pseudo-Hilbert curve in each quadrant, flip the lower left and lower right appropriately, and connect them all tip to tail.
4:59.46 5:11.22 (11.8s)
38
And the pattern continues like that for higher orders.
5:12.10 5:14.78 (2.7s)
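The subdivide-recurse-flip construction she describes can also be computed directly, without recursion, using the standard bit-manipulation algorithm for Hilbert curve indexing. A sketch, assuming the video's orientation (curve starts in the lower-left cell, ends in the lower-right, with y pointing up):

```python
def hilbert_cell(order, d):
    """Cell (col, row) visited at step d (0 .. 4**order - 1) of an
    order-`order` pseudo-Hilbert curve on a 2**order x 2**order grid.
    Orientation assumed as in the video: start lower left, end lower
    right, y pointing up."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # flip/rotate the quadrant so sub-curves connect tip to tail
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

order1 = [hilbert_cell(1, d) for d in range(4)]   # the four quadrant centers
order2 = [hilbert_cell(2, d) for d in range(16)]  # 4x4 grid, 16 cells
```

Every step of the resulting path moves to an edge-adjacent cell, which is exactly the "no awkward jumps" property that flipping the lower-left and lower-right copies was meant to guarantee.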
39
For the 256x256 pixel array, your mathematician friend explains, you would use an order-eight pseudo-Hilbert curve.
5:22.02 5:30.14 (8.1s)
40
And remember, defining a curve which weaves through each pixel is basically the same as defining a function from pixel space to frequency space, since you're associating each pixel with a point on the line.
5:31.00 5:44.06 (13.1s)
41
Now this is nice as a piece of art, but why would these pseudo-Hilbert curves be any better than just the snake curve?
5:45.44 5:51.54 (6.1s)
42
Well here's one very important reason.
5:52.46 5:54.38 (1.9s)
43
Imagine that you go through with this project, you integrate the software with real cameras and headphones, and it works!
5:54.96 6:00.64 (5.7s)
44
People around the world are using the device, building intuitions for vision via sound.
6:01.10 6:05.36 (4.3s)
45
What happens when you issue an upgrade that increases the resolution of the camera's image from 256x256 to 512x512?
6:06.20 6:15.30 (9.1s)
46
If you were using the snake curve, as you transition to a higher resolution, many points on this frequency line would have to go to completely different parts of pixel space.
6:16.58 6:26.56 (10.0s)
47
For example, let's follow a point about halfway along the frequency line.
6:27.19 6:30.90 (3.7s)
48
It'll end up about halfway up the pixel space, no matter the resolution, but where it is left to right can differ wildly as you go from 256x256 up to 512x512.
6:31.56 6:42.50 (10.9s)
49
This means everyone using your software would have to re-learn how to see with their ears, since the original intuitions of which points in space correspond to which frequencies no longer apply.
6:42.92 6:53.72 (10.8s)
50
However, with the Hilbert curve technique, as you increase the order of a pseudo-Hilbert curve, a given point on the line moves around less and less, it just approaches a more specific point in space.
6:54.72 7:08.30 (13.6s)
51
That way, you've given your users the opportunity to fine-tune their intuitions, rather than re-learning everything.
7:09.52 7:16.00 (6.5s)
52
So, for this sound-to-sight application, the Hilbert curve approach turns out to be exactly what you want.
7:19.46 7:25.22 (5.8s)
53
In fact, given how specific the goal is, it seems almost weirdly perfect.
7:26.22 7:31.52 (5.3s)
54
So you go back to your mathematician friend and ask her, what was the original motivation for defining one of these curves?
7:32.22 7:38.54 (6.3s)
55
She explains that near the end of the 19th century, in the aftershock of Cantor's research on infinity, mathematicians were interested in finding a mapping from a one-dimensional line into two-dimensional space in such a way that the line runs through every single point in space.
7:39.74 7:55.24 (15.5s)
56
To be clear, we're not talking about a finite bounded grid of pixels, like we had in the sound-to-sight application.
7:56.24 8:01.98 (5.7s)
57
This is continuous space, which is very infinite, and the goal is to have a line which is as thin as can be and has zero area, somehow pass through every single one of those infinitely many points that makes up the infinite area of space.
8:02.68 8:18.38 (15.7s)
58
Before 1890, a lot of people thought this was obviously impossible, but then Peano discovered the first of what would come to be known as space-filling curves.
8:19.68 8:29.24 (9.6s)
59
In 1891, Hilbert followed with his own slightly simpler space-filling curve.
8:30.18 8:34.40 (4.2s)
60
Technically, each one fills a square, not all of space, but I'll show you later on how once you've filled a square with a line, filling all of space is not an issue.
8:35.40 8:43.52 (8.1s)
61
By the way, mathematicians use the word curve to talk about a line running through space even if it has jagged corners.
8:44.62 8:51.40 (6.8s)
62
This is especially counterintuitive terminology in the context of a space-filling curve, which in a sense consists of nothing but sharp corners.
8:52.20 9:00.32 (8.1s)
63
A better name might be something like space-filling fractal, which some people do use, but hey, it's math, so we live with bad terminology.
9:00.86 9:08.84 (8.0s)
64
None of the pseudo-Hilbert curves that you use to fill pixelated space would count as a space-filling curve, no matter how high the order.
9:10.36 9:17.56 (7.2s)
65
Just zoom in on one of the pixels.
9:18.48 9:20.20 (1.7s)
66
When this pixel is considered part of infinite, continuous space, the curve only passes through the tiniest zero-area slice of it, and it certainly doesn't hit every point.
9:20.94 9:31.72 (10.8s)
67
Your mathematician friend explains that an actual bonafide Hilbert curve is not any one of these pseudo-Hilbert curves.
9:33.42 9:40.14 (6.7s)
68
Instead it's the limit of all of them.
9:40.82 9:42.56 (1.7s)
69
Defining this limit rigorously is delicate.
9:43.70 9:46.68 (3.0s)
70
You first have to formalize what these curves are as functions, specifically functions which take in a single number somewhere between 0 and 1 as their input, and output a pair of numbers.
9:47.42 9:58.72 (11.3s)
71
This input can be thought of as a point on the line, and the output can be thought of as coordinates in 2D space.
9:59.60 10:05.06 (5.5s)
72
But in principle it's just an association between a single number and pairs of numbers.
10:05.48 10:10.32 (4.8s)
73
For example, an order-2 pseudo-Hilbert curve as a function maps the input 0.3 to the output pair (0.125, 0.75).
10:11.28 10:21.64 (10.4s)
74
An order-3 pseudo-Hilbert curve maps that same input 0.3 to the output pair (0.0758, 0.6875).
10:22.58 10:31.82 (9.2s)
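Those sample values come from viewing each pseudo-Hilbert curve as a function from [0, 1] into the square. One reasonable convention, assumed in this sketch, is to walk at uniform speed along the polyline joining the cell centers; it reproduces the order-2 value quoted above exactly (the order-3 decimals depend slightly on the parametrization convention, so they are not asserted here):

```python
def hilbert_cell(order, d):
    """Cell (col, row) at step d of an order-`order` pseudo-Hilbert curve
    (standard bit-twiddling algorithm; video's orientation assumed: start
    lower left, end lower right, y pointing up)."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def pseudo_hilbert(order, t):
    """Walk a fraction t in [0, 1] along the polyline through the 4**order
    cell centers, and return the (x, y) point reached."""
    n = 1 << order
    centers = [((x + 0.5) / n, (y + 0.5) / n)
               for x, y in (hilbert_cell(order, d) for d in range(n * n))]
    u = t * (len(centers) - 1)
    i = min(int(u), len(centers) - 2)
    f = u - i
    (x0, y0), (x1, y1) = centers[i], centers[i + 1]
    return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))

print(pseudo_hilbert(2, 0.3))  # (0.125, 0.75) — the order-2 value above
```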
75
Now the core property that makes a function like this a curve, and not just any ol' association between single numbers and pairs of numbers, is continuity.
10:33.14 10:42.30 (9.2s)
76
The intuition behind continuity is that you don't want the output of your function to suddenly jump at any point when the input is only changing smoothly.
10:43.66 10:52.00 (8.3s)
77
And the way this is made rigorous in math is actually pretty clever, and fully appreciating space-filling curves requires digesting the formal idea of continuity, so it's definitely worth taking a brief side-step to go over it now.
10:52.82 11:07.38 (14.6s)
78
Consider a particular input point, a, and the corresponding output of the function, b.
11:08.34 11:14.16 (5.8s)
79
Draw a circle centered around a, and look at all the other input points inside that circle, and consider where the function takes all those points in the output space.
11:15.14 11:26.06 (10.9s)
80
Now draw the smallest circle you can centered at b that contains those outputs.
11:27.06 11:32.16 (5.1s)
81
Different choices for the size of the input circle might result in larger or smaller circles in the output space.
11:33.24 11:39.92 (6.7s)
82
But notice what happens when we go through this process at a point where the function jumps, drawing a circle around a, and looking at the input points within the circle, seeing where they map, and drawing the smallest possible circle centered at b containing those points.
11:40.70 11:57.62 (16.9s)
83
No matter how small the circle around a, the corresponding circle around b just cannot be smaller than that jump.
11:58.54 12:05.94 (7.4s)
84
For this reason, we say that the function is discontinuous at a if there's some positive lower bound on the size of this circle that surrounds b.
12:07.34 12:16.18 (8.8s)
85
If the circle around b can be made as small as you want, with sufficiently small choices for circles around a, you say that the function is continuous at a.
12:17.46 12:26.52 (9.1s)
86
A function as a whole is called continuous if it's continuous at every possible input point.
12:27.34 12:32.16 (4.8s)
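This circle game is easy to play numerically. The toy function below (my example, not the video's) jumps from the bottom edge of the square to the top edge at a = 0.5: however small the input circle gets, the output circle's radius stays pinned above the size of the jump, while at a point of continuity it shrinks to zero.

```python
import math

def f(t):
    """A discontinuous 'curve': points before t = 0.5 land on the bottom
    edge of the square, points from 0.5 on land on the top edge."""
    return (t, 0.0) if t < 0.5 else (t, 1.0)

def output_radius(a, r, samples=1000):
    """Radius of the smallest circle centered at f(a) that contains the
    images of all sampled inputs within distance r of a."""
    b = f(a)
    pts = [f(a + r * (2 * k / samples - 1)) for k in range(samples + 1)]
    return max(math.dist(b, p) for p in pts)

# At the jump, shrinking the input circle never shrinks the output circle
# below the jump size of 1:
radii_at_jump = [output_radius(0.5, r) for r in (0.1, 0.01, 0.001)]
# At a point of continuity, the output circle shrinks right along with it:
radii_smooth = [output_radius(0.25, r) for r in (0.1, 0.01, 0.001)]
```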
87
Now with that as a formal definition of curves, you're ready to define what an actual Hilbert curve is.
12:32.98 12:39.06 (6.1s)
88
Doing this relies on a wonderful property of the sequence of pseudo-Hilbert curves, which should feel familiar.
12:40.02 12:46.40 (6.4s)
89
Take a given input point, like 0.3, and apply each successive pseudo-Hilbert curve function to this point.
12:47.40 12:54.22 (6.8s)
90
The corresponding outputs, as we increase the order of the curve, approach some particular point in space.
12:55.06 13:01.32 (6.3s)
91
It doesn't matter what input you start with: this sequence of outputs you get by applying each successive pseudo-Hilbert curve to this point always stabilizes and approaches some particular point in 2D space.
13:02.34 13:14.06 (11.7s)
92
This is absolutely not true, by the way, for snake curves, or for that matter most sequences of curves filling pixelated space of higher and higher resolutions.
13:15.34 13:23.76 (8.4s)
93
The outputs associated with a given input become wildly erratic as the resolution increases, always jumping from left to right, and never actually approaching anything.
13:24.37 13:34.64 (10.3s)
94
Now because of this property, we can define a Hilbert curve function like this.
13:35.90 13:40.38 (4.5s)
95
For a given input value between 0 and 1, consider the sequence of points in 2D space you get by applying each successive pseudo-Hilbert curve function at that point.
13:41.04 13:50.88 (9.8s)
96
The output of the Hilbert curve function evaluated on this input is just defined to be the limit of those points.
13:51.42 13:59.00 (7.6s)
97
Because the sequence of pseudo-Hilbert curve outputs always converges no matter what input you start with, this is actually a well-defined function in a way that it never could have been had we used snake curves.
14:00.38 14:11.94 (11.6s)
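The contrast between the two curve families can be checked numerically. This sketch (assuming the same uniform-speed, cell-center parametrization as before, and a snake curve that runs bottom-to-top as in the animation) evaluates both at the fixed input t = 0.3: the pseudo-Hilbert outputs settle down as the order grows, while the snake-curve output keeps its height but lurches far sideways when the resolution doubles from 256 to 512.

```python
import math

def hilbert_cell(order, d):
    """Cell (col, row) visited at step d of an order-`order` pseudo-Hilbert
    curve, via the standard bit-twiddling algorithm (video's orientation
    assumed: start lower left, end lower right, y pointing up)."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate/flip so the four sub-curves connect tip to tail
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def walk(point_at, count, t):
    """Walk a fraction t in [0, 1] along the polyline through the points
    point_at(0), ..., point_at(count - 1), and return the point reached."""
    u = t * (count - 1)
    i = min(int(u), count - 2)
    f = u - i
    (x0, y0), (x1, y1) = point_at(i), point_at(i + 1)
    return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))

def pseudo_hilbert(order, t):
    n = 1 << order
    def center(d):
        x, y = hilbert_cell(order, d)
        return ((x + 0.5) / n, (y + 0.5) / n)
    return walk(center, n * n, t)

def snake(n, t):
    """Snake curve on an n x n grid: bottom row to top, alternating
    left-to-right and right-to-left."""
    def center(d):
        row, offset = divmod(d, n)
        col = offset if row % 2 == 0 else n - 1 - offset
        return ((col + 0.5) / n, (row + 0.5) / n)
    return walk(center, n * n, t)

t = 0.3
# Successive pseudo-Hilbert outputs at the same input creep closer together...
hilbert_steps = [math.dist(pseudo_hilbert(m, t), pseudo_hilbert(m + 1, t))
                 for m in range(2, 8)]
# ...while the snake-curve output jumps sideways when resolution doubles.
snake_jump = abs(snake(256, t)[0] - snake(512, t)[0])
```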
98
Now I'm not going to go through the proof for why this gives a space-filling curve, but let's at least see what needs to be proved.
14:13.44 14:19.34 (5.9s)
99
First, verify that this is a well-defined function by proving that the outputs of the pseudo-Hilbert curve functions really do converge the way I'm telling you they do.
14:19.34 14:28.86 (9.5s)
100
Second, show that this function gives a curve, meaning it's continuous.
14:29.40 14:33.98 (4.6s)
101
Third, and most important, show that it fills space, in the sense that every single point in the unit square is an output of this function.
14:35.14 14:43.66 (8.5s)
102
I really do encourage anyone watching this to take a stab at each one of these.
14:44.58 14:48.36 (3.8s)
103
Spoiler alert, all three of these facts turn out to be true.
14:48.88 14:51.86 (3.0s)
104
You can extend this to a curve that fills all of space just by tiling space with squares and then chaining a bunch of Hilbert curves together in a spiraling pattern of tiles, connecting the end of one tile to the start of a new tile with an added little stretch of line if you need to.
14:53.66 15:08.56 (14.9s)
105
You can think of the first tile as coming from the interval from 0 to 1, the second tile as coming from the interval from 1 to 2, and so on, so the entire positive real number line is getting mapped into all of 2D space.
15:09.66 15:24.62 (15.0s)
106
Take a moment to let that fact sink in.
15:25.42 15:27.32 (1.9s)
107
A line, the platonic form of thinness itself, can wander through an infinitely extending and richly dense space and hit every single point.
15:27.66 15:38.20 (10.5s)
108
Notice, the core property that made pseudo-Hilbert curves useful in both the sound-to-sight application and in their infinite origins is that points on the curve move around less and less as you increase the order of those curves.
15:43.24 15:57.86 (14.6s)
109
While translating images to sound, this was useful because it means upgrading to higher resolutions doesn't require retraining your senses all over again.
15:58.78 16:06.94 (8.2s)
110
For mathematicians interested in filling continuous space, this property is what ensured that talking about the limit of a sequence of curves was a meaningful thing to do.
16:07.46 16:18.18 (10.7s)
111
And this connection here between the infinite and finite worlds seems to be more of a rule in math than an exception.
16:19.06 16:25.14 (6.1s)
112
Another example that several astute commenters on the Inventing Math video pointed out is the connection between the divergent sum of all powers of 2 and the way that the number -1 is represented in computers with bits.
16:26.02 16:38.58 (12.6s)
113
It's not so much that the infinite result is directly useful, but instead the same patterns and constructs that are used to define and prove infinite facts have finite analogs, and these finite analogs are directly useful.
16:39.58 16:54.12 (14.5s)
114
But the connection is often deeper than a mere analogy.
16:55.10 16:57.60 (2.5s)
115
Many theorems about an infinite object are equivalent to some theorem regarding a family of finite objects.
16:58.28 17:05.38 (7.1s)
116
For example, if during your sound-to-sight project you were to sit down and really formalize what it means for your curve to stay stable as you increase camera resolution, you would end up effectively writing the definition of what it means for a sequence of curves to have a limit.
17:06.28 17:22.46 (16.2s)
117
In fact, a statement about some infinite object, whether that's a sequence or a fractal, can usually be viewed as a particularly clean way to encapsulate a truth about a family of finite objects.
17:23.40 17:36.28 (12.9s)
118
The lesson to take away here is that even when a statement seems very far removed from reality, you should always be willing to look under the hood and at the nuts and bolts of what's really being said.
17:37.48 17:47.74 (10.3s)
119
Who knows, you might find insights for representing numbers from divergent sums, or for seeing with your ears from filling space.
17:48.48 17:54.90 (6.4s)

About this lesson

Practice English with "Hilbert's Curve: Is infinite math useful?" using the shadowing technique.

Practicing consistently for 15-30 minutes a day builds confidence for IELTS Speaking.

What is shadowing? A science-backed way to improve your English quickly

Shadowing is a language-learning technique originally developed to train professional interpreters and popularized by the polyglot scholar Dr. Alexander Arguelles. The core principle is simple but powerful: while listening to a native speaker, you immediately repeat what you hear out loud with a short delay of one to two seconds, following the speaker like a shadow. Unlike grammar study or passive listening, shadowing trains your brain and your speech muscles to process and reproduce English simultaneously, in real time. Research shows that this method significantly improves pronunciation accuracy, intonation, rhythm, linking, listening comprehension, and speaking fluency. It is especially effective for IELTS Speaking preparation and for anyone who wants to communicate naturally in English.

How to practice effectively on ShadowingEnglish

  1. Choose a video: Pick a YouTube video with natural, clear English. TED Talks, BBC news, movie scenes, podcasts, and IELTS sample-answer videos all work well. Copy the URL and paste it into the search box. Starting with short videos (under 5 minutes) on topics you actually care about is the best way to stay motivated.
  2. Listen first and understand the content: On the first pass, just listen at 1x speed; there's no need to repeat yet. Work out what each sentence means, and notice how the speaker stresses words, links sounds together, and pauses. Shadowing is far more effective once you understand the content.
  3. Set up shadowing mode:
    • Wait Mode: Choose +3s or +5s to pause automatically after each sentence, giving you time to repeat it. If you prefer direct control, choose Manual and press Next to advance.
    • Sub Sync: YouTube subtitles can drift out of sync with the audio. Adjust in ±100ms steps so you can follow along with accurate timing.
  4. Shadow out loud (the core exercise): This is the heart of the practice. The moment a sentence plays, or during the pause, repeat it out loud, loudly and confidently. Don't just read the words: imitate the speaker's rhythm, stress, pitch, and the way sounds link together. The goal is to sound like the speaker's shadow. Use the Repeat feature to drill the same sentence until it comes out of your mouth naturally.
  5. Raise the difficulty and keep at it: Once a passage feels comfortable, push yourself to a more challenging level. Increase the speed to <code>1.25x</code> or <code>1.5x</code> to train faster language reflexes. Setting Wait Mode to <code>Off</code> and shadowing continuously is the most advanced and most effective mode. Practice 15-30 minutes a day, and you'll notice a clear difference within a few weeks.
