
Commit 52e80ad

fix: update various links leading to the old course

1 parent: c487c72

11 files changed: +20 -16 lines

sources/academy/glossary/concepts/robot_process_automation.md

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ With the advance of [machine learning](https://en.wikipedia.org/wiki/Machine_lea

## Is RPA the same as web scraping? {#is-rpa-the-same-as-web-scraping}

- While [web scraping](../../webscraping/scraping_basics_legacy/index.md) is a kind of RPA, it focuses on extracting structured data. RPA focuses on the other tasks in browsers - everything except for extracting information.
+ While web scraping is a kind of RPA, it focuses on extracting structured data. RPA focuses on the other tasks in browsers - everything except for extracting information.

## Additional resources {#additional-resources}

sources/academy/glossary/tools/apify_cli.md

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@ The [Apify CLI](/cli) helps you create, develop, build and run Apify Actors, and

## Installing {#installing}

- To install the Apify CLI, you'll first need npm, which comes preinstalled with Node.js. If you haven't yet installed Node, [learn how to do that](../../webscraping/scraping_basics_legacy/data_extraction/computer_preparation.md). Additionally, make sure you've got an Apify account, as you will need to log in to the CLI to gain access to its full potential.
+ To install the Apify CLI, you'll first need npm, which comes preinstalled with Node.js. Additionally, make sure you've got an Apify account, as you will need to log in to the CLI to gain access to its full potential.

Open up a terminal instance and run the following command:
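The command itself falls outside this hunk; going by the current Apify docs, it is presumably the global npm install, followed by the login the paragraph mentions:

```bash
npm install -g apify-cli

# Log in with your Apify account so the CLI can access the platform.
apify login
```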

sources/academy/platform/expert_scraping_with_apify/actors_webhooks.md

Lines changed: 5 additions & 1 deletion
@@ -7,6 +7,10 @@ slug: /expert-scraping-with-apify/actors-webhooks

**Learn more advanced details about Actors, how they work, and the default configurations they can take. Also, learn how to integrate your Actor with webhooks.**

+ :::caution Updates coming
+ This lesson is subject to change because it currently relies on code from our archived **Web scraping basics for JavaScript devs** course. For now you can still access the archived course, but we plan to close it in a few months. This lesson will be updated to remove the dependency.
+ :::

---

Thus far, you've run Actors on the platform and written an Actor of your own, which you published to the platform yourself using the Apify CLI; therefore, it's fair to say that you are becoming more familiar and comfortable with the concept of **Actors**. Within this lesson, we'll take a more in-depth look at Actors and what they can do.

@@ -39,7 +43,7 @@ Prior to moving forward, please read over these resources:
## Our task {#our-task}

- In this task, we'll be building on top of what we already created in the [Web scraping basics for JavaScript devs](/academy/scraping-basics-javascript/legacy/challenge) course's final challenge, so keep those files safe!
+ In this task, we'll be building on top of what we already created in the [Web scraping basics for JavaScript devs](../../webscraping/scraping_basics_legacy/challenge/index.md) course's final challenge, so keep those files safe!

Once our Amazon Actor has completed its run, we will, rather than sending an email to ourselves, call an Actor through a webhook. The Actor called will be a new Actor that we will create together, which will take the dataset ID as input, then subsequently filter through all of the results and return only the cheapest one for each product. All of the results of the Actor will be pushed to its default dataset.
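A sketch of what that filtering logic could look like with the Apify SDK — the input shape and the `asin`/`price` field names are hypothetical stand-ins for whatever the Amazon Actor actually pushes:

```js
import { Actor } from 'apify';

await Actor.init();

// The webhook's payload supplies the scraper run's default dataset ID.
const { datasetId } = await Actor.getInput();
const dataset = await Actor.openDataset(datasetId);
const { items } = await dataset.getData();

// Keep only the cheapest offer per product (field names are hypothetical).
const cheapestByProduct = new Map();
for (const item of items) {
    const current = cheapestByProduct.get(item.asin);
    if (!current || item.price < current.price) cheapestByProduct.set(item.asin, item);
}

await Actor.pushData([...cheapestByProduct.values()]);
await Actor.exit();
```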

sources/academy/platform/expert_scraping_with_apify/index.md

Lines changed: 2 additions & 6 deletions
@@ -16,15 +16,11 @@ This course will teach you the nitty gritty of what it takes to build pro-level

Before developing a pro-level Apify scraper, there are some important things you should have at least a bit of knowledge about (knowing the basics of each is enough to continue through this section), as well as some things that you should have installed on your system.

- > If you've already gone through the [Web scraping basics for JavaScript devs](../../webscraping/scraping_basics_legacy/index.md) and the first courses of the [Apify platform category](../apify_platform.md), you will be more than well equipped to continue on with the lessons in this course.
-
- <!-- ### Puppeteer/Playwright {#puppeteer-playwright}
-
- [Puppeteer](https://pptr.dev/) is a library for running and controlling a [headless browser](../../webscraping/scraping_basics_legacy/crawling/headless_browser.md) in Node.js, and was developed at Google. The team working on it was hired by Microsoft to work on the [Playwright](https://playwright.dev/) project; therefore, many parallels can be seen between both the `puppeteer` and `playwright` packages. Proficiency in at least one of these will be good enough. -->
+ > If you've already gone through the [Web scraping basics for JavaScript devs](../../webscraping/scraping_basics_javascript/index.md) and the first courses of the [Apify platform category](../apify_platform.md), you will be more than well equipped to continue on with the lessons in this course.

### Crawlee, Apify SDK, and the Apify CLI {#crawlee-apify-sdk-and-cli}

- If you're feeling ambitious, you don't need to have any prior experience with Crawlee to get started with this course; however, at least 5–10 minutes of exposure is recommended. If you haven't yet tried out Crawlee, you can refer to [this lesson](../../webscraping/scraping_basics_legacy/crawling/pro_scraping.md) in the **Web scraping basics for JavaScript devs** course (and ideally follow along). To familiarize yourself with the Apify SDK, you can refer to the [Apify Platform](../apify_platform.md) category.
+ If you're feeling ambitious, you don't need to have any prior experience with Crawlee to get started with this course; however, at least 5–10 minutes of exposure is recommended. If you haven't yet tried out Crawlee, you can refer to the [Using a scraping framework with Node.js](../../webscraping/scraping_basics_javascript/12_framework.md) lesson of the **Web scraping basics for JavaScript devs** course. To familiarize yourself with the Apify SDK, you can refer to the [Apify Platform](../apify_platform.md) category.

The Apify CLI will play a core role in the running and testing of the Actor you will build, so if you haven't gotten it installed already, please refer to [this short lesson](../../glossary/tools/apify_cli.md).

sources/academy/platform/expert_scraping_with_apify/solutions/integrating_webhooks.md

Lines changed: 4 additions & 0 deletions
@@ -7,6 +7,10 @@ slug: /expert-scraping-with-apify/solutions/integrating-webhooks

**Learn how to integrate webhooks into your Actors. Webhooks are a super powerful tool, and can be used to do almost anything!**

+ :::caution Updates coming
+ This lesson is subject to change because it currently relies on code from our archived **Web scraping basics for JavaScript devs** course. For now you can still access the archived course, but we plan to close it in a few months. This lesson will be updated to remove the dependency.
+ :::

---

In this lesson we'll be writing a new Actor and integrating it with our beloved Amazon scraping Actor. First, we'll navigate to the same directory where our **demo-actor** folder lives, and run `apify create filter-actor` _(once again, you can name the Actor whatever you want, but for this lesson, we'll be calling the new Actor **filter-actor**)_. When prompted about the programming language, select **JavaScript**:
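As a shell sketch, that setup step amounts to roughly the following (the parent directory path is hypothetical):

```bash
cd path/to/parent-dir     # hypothetical: wherever the demo-actor folder lives
apify create filter-actor
```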

sources/academy/tutorials/node_js/analyzing_pages_and_fixing_errors.md

Lines changed: 0 additions & 2 deletions
@@ -69,8 +69,6 @@ try {
}
```

- Read more information about logging and error handling in our developer [best practices](../../webscraping/scraping_basics_legacy/best_practices.md) section.

### Saving snapshots {#saving-snapshots}

By snapshots, we mean **screenshots** if you use a [browser with Puppeteer/Playwright](../../webscraping/puppeteer_playwright/index.md) and HTML saved into a [key-value store](https://crawlee.dev/api/core/class/KeyValueStore) that you can display in your own browser. Snapshots are useful throughout your code but especially important in error handling.
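A helper along those lines might look like this — a sketch assuming a Puppeteer/Playwright `page` object and Crawlee's key-value store; the helper name and key names are made up:

```js
import { KeyValueStore } from 'crawlee';

// Hypothetical helper: persist a screenshot and the raw HTML for later inspection.
async function saveSnapshot(page, key) {
    const store = await KeyValueStore.open();
    const screenshot = await page.screenshot({ fullPage: true });
    await store.setValue(`${key}.png`, screenshot, { contentType: 'image/png' });
    await store.setValue(`${key}.html`, await page.content(), { contentType: 'text/html' });
}
```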

sources/academy/tutorials/node_js/dealing_with_dynamic_pages.md

Lines changed: 1 addition & 1 deletion
@@ -41,7 +41,7 @@ If you're in a brand new project, don't forget to initialize your project, then
npm init -y && npm i crawlee
```

- Now, let's write some data extraction code to extract each product's data. This should look familiar if you went through the [Data Extraction](../../webscraping/scraping_basics_legacy/data_extraction/index.md) lessons:
+ Now, let's write some data extraction code to extract each product's data. This should look familiar if you went through the [Web scraping basics for JavaScript devs](/academy/scraping-basics-javascript) course:

```js
import { CheerioCrawler } from 'crawlee';
```
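The hunk truncates the code block at the import; the extraction code that follows presumably resembles this sketch (selectors are hypothetical, not the lesson's exact markup):

```js
import { CheerioCrawler } from 'crawlee';

const crawler = new CheerioCrawler({
    async requestHandler({ $, request }) {
        // Hypothetical selectors; the lesson targets its own demo site's markup.
        const products = $('.product-item')
            .map((_, el) => ({
                title: $(el).find('h3').text().trim(),
                price: $(el).find('.price').text().trim(),
            }))
            .get();
        console.log(`${request.url}: extracted ${products.length} products`);
    },
});

await crawler.run(['https://demo-webstore.apify.org/']);
```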

sources/academy/webscraping/anti_scraping/mitigation/using_proxies.md

Lines changed: 4 additions & 0 deletions
@@ -7,6 +7,10 @@ slug: /anti-scraping/mitigation/using-proxies

**Learn how to use and automagically rotate proxies in your scrapers by using Crawlee, and a bit about how to obtain pools of proxies.**

+ :::caution Updates coming
+ This lesson is subject to change because it currently relies on code from our archived **Web scraping basics for JavaScript devs** course. For now you can still access the archived course, but we plan to close it in a few months. This lesson will be updated to remove the dependency.
+ :::

---

In the [**Web scraping basics for JavaScript devs**](../../scraping_basics_legacy/crawling/pro_scraping.md) course, we learned about the power of Crawlee, and how it can streamline the development process of web crawlers. You've already seen how powerful the `crawlee` package is; however, what you've been exposed to thus far is only the tip of the iceberg.
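For context, rotating proxies in Crawlee comes down to handing the crawler a `ProxyConfiguration`; a minimal sketch with placeholder proxy URLs:

```js
import { CheerioCrawler, ProxyConfiguration } from 'crawlee';

// Placeholder proxy URLs; Crawlee rotates through the list automatically.
const proxyConfiguration = new ProxyConfiguration({
    proxyUrls: [
        'http://proxy-1.example.com:8000',
        'http://proxy-2.example.com:8000',
    ],
});

const crawler = new CheerioCrawler({
    proxyConfiguration,
    async requestHandler({ request, proxyInfo }) {
        console.log(`${request.url} fetched via ${proxyInfo?.url}`);
    },
});
```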

sources/academy/webscraping/puppeteer_playwright/executing_scripts/extracting_data.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ import TabItem from '@theme/TabItem';

---

- Now that we know how to execute scripts on a page, we're ready to learn a bit about [data extraction](../../scraping_basics_legacy/data_extraction/index.md). In this lesson, we'll be scraping all the on-sale products from our [Fakestore](https://demo-webstore.apify.org/search/on-sale) website. Playwright & Puppeteer offer two main methods for data extraction:
+ Now that we know how to execute scripts on a page, we're ready to learn a bit about data extraction. In this lesson, we'll be scraping all the on-sale products from our [Fakestore](https://demo-webstore.apify.org/search/on-sale) website. Playwright & Puppeteer offer two main methods for data extraction:

1. Directly in `page.evaluate()` and other evaluate functions such as `page.$$eval()`.
2. In the Node.js context using a parsing library such as [Cheerio](https://www.npmjs.com/package/cheerio)
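Side by side, the two approaches might look like this sketch (Playwright shown; the `h3` selector is a hypothetical product-title selector):

```js
import { chromium } from 'playwright';
import * as cheerio from 'cheerio';

const browser = await chromium.launch();
const page = await browser.newPage();
await page.goto('https://demo-webstore.apify.org/search/on-sale');

// 1. Evaluate directly in the browser context.
const inBrowser = await page.$$eval('h3', (els) => els.map((el) => el.textContent));

// 2. Parse the rendered HTML in Node.js with Cheerio.
const $ = cheerio.load(await page.content());
const inNode = $('h3').map((_, el) => $(el).text()).get();

console.log(inBrowser, inNode);
await browser.close();
```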

sources/academy/webscraping/puppeteer_playwright/index.md

Lines changed: 0 additions & 2 deletions
@@ -61,8 +61,6 @@ npm install puppeteer

</TabItem>
</Tabs>

- > For a more in-depth guide on how to set up the basic environment we'll be using in this tutorial, check out the [**Computer preparation**](../scraping_basics_legacy/data_extraction/computer_preparation.md) lesson in the **Web scraping basics for JavaScript devs** course

## Course overview {#course-overview}

1. [Launching a browser](./browser.md)
