
Movies - Songs - Games with Exercises A2 Level


1. Tenses with Exercises A2 Level

1.21. Warm-up Video for Future Simple

WARM-UP VIDEO FOR FUTURE SIMPLE

Instructions. Listen to the video and type the future expressions with 'will' and their frequently used verbs.


------------------------------------

Exercise. Complete each gap with the words and expressions you hear in the video.

and that's not a good combination, as it turns out. And yet rather than be scared, most of (1) ………….. that what I'm talking about is kind of cool. I'm going to describe how the gains we make or inspire us to destroy ourselves. And yet if you're anything like me, (2) ………….. that it's fun to think about these things. And that response is part of the problem. OK? That response should worry you. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, (3) ………….. to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, (4) ………….. to improve themselves. And then we risk what the mathematician IJ Good called an "intelligence explosion." We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going. The second assumption is that (5) ………….. going. (6) ………….. to improve our intelligent machines. And given the value of intelligence -- I mean, intelligence is either the source of everything we value. We want to cure diseases like Alzheimer's and cancer. We want to understand economic systems. We want to improve our climate science. So (7) ………….. this, if we can. The train is already out of the station, and there's no brake to pull. Finally, we don't stand on a peak of intelligence, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long (8) ………….. us to create the conditions to do that safely. Let me say that again. We have no idea how long (9) ………….. us to create the conditions to do that safely. And if you haven't noticed, 50 years is not what it used to be.
Another reason we're told not to worry is that these machines can't help but share our values because (10) ………….. literally extensions of ourselves. (11) ………….. grafted onto our brains, and we'll essentially become their limbic systems. Now take a moment to consider that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then (12) ………….. to absorb the economic and political consequences of getting them right. But the moment we admit


Key: Check your answers against the key below, then read the script from the video aloud to improve your English.

and that's not a good combination, as it turns out. And yet rather than be scared, most of (1) (you will feel) that what I'm talking about is kind of cool. I'm going to describe how the gains we make or inspire us to destroy ourselves. And yet if you're anything like me, (2) (you'll find) that it's fun to think about these things. And that response is part of the problem. OK? That response should worry you. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, (3) (we will continue) to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, (4) (they will begin) to improve themselves. And then we risk what the mathematician IJ Good called an "intelligence explosion." We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going. The second assumption is that (5) (we will keep) going. (6) (We will continue) to improve our intelligent machines. And given the value of intelligence -- I mean, intelligence is either the source of everything we value. We want to cure diseases like Alzheimer's and cancer. We want to understand economic systems. We want to improve our climate science. So (7) (we will do) this, if we can. The train is already out of the station, and there's no brake to pull. Finally, we don't stand on a peak of intelligence, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long (8) (it will take) us to create the conditions to do that safely. Let me say that again. We have no idea how long (9) (it will take) us to create the conditions to do that safely. And if you haven't noticed, 50 years is not what it used to be.
Another reason we're told not to worry is that these machines can't help but share our values because (10) (they will be) literally extensions of ourselves. (11) (They'll be) grafted onto our brains, and we'll essentially become their limbic systems. Now take a moment to consider that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then (12) (we will need) to absorb the economic and political consequences of getting them right. But the moment we admit


Sources

Channel: TED. Can we build AI without losing control over it? | Sam Harris: https://www.youtube.com/watch?v=8nt3edWLgIg


---------------------------------------------

Compiled by Top Grade Edu