Aggregating Flight Pricing Data with Apache Kafka and ksqlDB

Introduction

Lately, I decided to explore some other notable open source projects. Apache Kafka is a project I heard about years ago, but I never encountered work that warranted its use. After I shifted into the software development world and became more accustomed to working with large datasets, specifically through web scraping, Apache Kafka came across my radar once again. Apache Kafka, as defined by the Apache Foundation, is an open source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. Kafka has extremely high throughput and can scale to very large clusters.

The core of Apache Kafka is written in Java, and its famous Streams API is written in Java as well. I had a sinking feeling, at first, that to use Apache Kafka well, I would have to learn Java. Fortunately, I learned there are many open source Kafka clients/bindings written in other languages, like Python. For this project, I chose to use Confluent's Kafka Python Client.

One of my personal projects that I find really interesting to work on is a Google Flights price scraper. Even though some people see web scraping as dull, I find collecting data and making decisions from it quite interesting, especially when it revolves around travel. One of my favorite destinations to travel to is Japan. For this project, I chose to incorporate Apache Kafka components into my existing Google Flights scraper. In the end, I want to achieve basic streaming and use ksqlDB to interact with the flight data as it flows in real time.

Use Case Diagram

To help illustrate this use case, I created a basic diagram in LibreOffice Draw. I hope this diagram explains the use case in an easy way:

How It Works

The Google Flights price scraper is called FlightScraper. FlightScraper is written in Python and uses requests to fetch HTML content from flights.google.com. Once the HTML has been fetched, Beautiful Soup parses the flight information out of the page. To prevent Google's bot protections from interfering with data collection, all HTTP requests go through a rotating proxy endpoint from a third-party provider. Only one node performs requests, using a multithreaded approach; the request load it sends is small compared to that of large botnets acting maliciously. DISCLAIMER: Please act responsibly when scraping data from a website.

The processing steps in the diagram are numbered from start to finish. That is, 1 signifies FlightScraper sending the HTTP request to the external proxy endpoint. Once the proxy endpoint receives the request, it handles completing the request to flights.google.com. Once flights.google.com processes the request, the response is sent back to the proxy endpoint that originally established the connection, and from there the proxy endpoint relays the response to the threaded worker that started the whole process. FlightScraper then parses the data using Beautiful Soup and builds a JSON object that represents the flight.
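To make the fetch-and-parse step concrete, here's a minimal sketch of that flow in Python. The proxy URL and the CSS selectors are placeholders, not the values FlightScraper actually uses:

import json
import requests
from bs4 import BeautifulSoup

# Hypothetical rotating proxy endpoint; the real provider's URL differs
PROXIES = {"https": "http://user:pass@rotating-proxy.example.com:8080"}

def scrape_flight_page(url):
    response = requests.get(url, proxies=PROXIES, timeout=30)
    soup = BeautifulSoup(response.text, "html.parser")
    flights = []
    # Placeholder selector; Google's real markup is more involved
    for listing in soup.select("li.flight-result"):
        price_text = listing.select_one(".price").text
        flights.append({
            "origin": "CLE",
            "destination": "HND",
            "price": int(price_text.strip("$").replace(",", "")),
        })
    return json.dumps(flights)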

How Kafka Is Involved

Once the flight data JSON has been created, FlightScraper produces a message to the configured Kafka topic, in this case a topic called flight-data. The Kafka message is sent to the topic that exists on the broker. Like many distributed systems, Kafka can scale to hundreds or even thousands of brokers.

The distributed storage platform Ceph operates in a similar fashion. For example, if a user wants to add additional storage to expand the cluster and increase redundancy, the administrator of the Ceph storage cluster can add an OSD (Object Storage Daemon), which increases the cluster's capacity and causes data to rebalance. This backend process ensures that, if one OSD fails, the data will be protected. When I read through Kafka's documentation, I noticed these similarities to Ceph. Just like Ceph's auto-healing/auto-balancing capabilities, Kafka will ensure messages in a topic are protected from data corruption as best it can. Kafka messages sent to a broker are also ordered, which ensures real-time applications built around Kafka don't receive data that may be "old". If data changes frequently, then even a message from a minute ago might be considered outdated.

Finally, to leverage and manipulate the incoming data, ksqlDB is used. ksqlDB is a project that works with Apache Kafka to provide a SQL-like layer over Kafka data. If a user knows SQL, they can use that knowledge to interact with incoming data. In the following examples, ksqlDB will be used to filter out incoming flights that don't match certain criteria.
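Before moving on to ksqlDB, here's a rough sketch of the producing side using Confluent's Kafka Python Client (the broker address and message fields are placeholders):

from confluent_kafka import Producer
import json

producer = Producer({"bootstrap.servers": "localhost:9092"})

def delivery_report(err, msg):
    # Called once per message to confirm delivery or surface an error
    if err is not None:
        print(f"Delivery failed: {err}")

flight = {"origin": "CLE", "destination": "HND", "price": 1024}
producer.produce("flight-data",
                 value=json.dumps(flight).encode("utf-8"),
                 callback=delivery_report)
producer.flush()  # block until outstanding messages are delivered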

FlightScraper in Action

Now that FlightScraper's architecture has been outlined, let's actually start seeing the data it produces. FlightScraper has additional abilities, like generating date ranges based on how long a trip will last. In the following examples, I will have FlightScraper generate 10-day date ranges for flights going from Cleveland (CLE) to Haneda (HND), starting from today (i.e. 9/2/24) and extending 180 ranges into the future (each range covering 10 days). As the data flows into the Kafka broker, ksqlDB will process it and EMIT the changes as they arrive in real time.
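Generating those ranges is a small datetime exercise. Here's a sketch, assuming back-to-back 10-day windows (FlightScraper's actual range logic may differ):

from datetime import date, timedelta

def date_ranges(start, trip_days=10, count=180):
    # Yield (departure, return) pairs, each window trip_days long
    for i in range(count):
        depart = start + timedelta(days=i * trip_days)
        yield depart, depart + timedelta(days=trip_days)

for depart, ret in date_ranges(date(2024, 9, 2)):
    print(depart, ret)  # hand each pair to the scraper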

In order to interact with the Kafka data via ksqlDB, we’ll need to create a stream based on the Kafka topic. Here’s an example of a query that creates a stream based on incoming data to the Kafka broker:

CREATE STREAM flights (
    origin VARCHAR,
    destination VARCHAR,
    embarkDate VARCHAR,
    embarkTime VARCHAR,
    airline VARCHAR,
    arrivalTime VARCHAR,
    departDate VARCHAR,
    duration VARCHAR,
    flightType VARCHAR,
    price INTEGER,
    pricePerWay DOUBLE,
    timestamp VARCHAR
) WITH (kafka_topic='flight-data', value_format='json', partitions=3);

Before creating the preceding stream, we need to spin up ksqlDB. You can start ksqlDB by running the following command:

sudo /usr/bin/ksql-server-start /etc/ksqldb/ksql-server.properties
NOTE: Many of the command examples can be found in ksqlDB's documentation. I'm using ksqlDB via the standalone Debian package, so that's the setup I'm referencing. If you're using this method, you'll need to configure the ksql-server.properties file first.
If you’re using ksqlDB in a container or via another method, Confluent provides documentation for that too.
Now, let’s connect to ksqlDB. In my environment, I’m running ksqlDB on the same node as FlightScraper (i.e. localhost):
/usr/bin/ksql http://0.0.0.0:8088

Now, for the grand finale, let’s see how the data gets represented:

(Example Using Just a Simple SELECT *)
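For reference, the push query behind that output is simply:

SELECT * FROM flights EMIT CHANGES;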

As I mentioned before, ksqlDB allows users to leverage SQL-like functionality to manipulate and change how the data stream is represented. In the next example, I’m going to filter the price and only return flights that are less than or equal to $1,050:

(Example Using a <= Comparison)
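And the filter itself is just a one-line WHERE clause on the stream:

SELECT * FROM flights WHERE price <= 1050 EMIT CHANGES;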

The preceding examples are just basic use cases of Apache Kafka and ksqlDB. From my experiences with this project, I can now think of many different use cases where streaming data can be built into real-time applications. Apache Kafka and ksqlDB make a great pair for building dynamic and real-time platforms. If you haven’t checked out Apache Kafka or ksqlDB, you should!

Credits:

Apache Kafka in 100 Seconds: https://www.youtube.com/watch?v=uvb00oaa3k8

ksqlDB SQL Quick Reference: https://docs.ksqldb.io/en/latest/developer-guide/ksqldb-reference/quick-reference/

apache/kafka: https://hub.docker.com/r/apache/kafka

ksqlDB Quick Start: https://ksqldb.io/quickstart.html

Networking: People and Computers

Learning Objectives

    • Define the term networking. Explain that networking goes beyond just technology.
    • Explain the importance of communication.
    • List ways that technology can help make communication between humans more effective. Mention real-world examples of communication platforms.
    • Explain that computer networking can enable positive and negative communication.

What is Networking?

You may have heard the term “networking” before. The word “networking” can mean different things. You could define networking as, “the linking of computers to allow them to operate interactively”. At least, that’s how Oxford defines it. In this section and for all other sections in the series, let’s define networking using IBM’s definition, which is “defining a set of protocols (that is, rules and standards) that allow application programs to talk with each other without regard to (their) hardware and operating systems”. There’s networking in technology, but there’s also networking in everyday life. When we network with different people, we learn about them. We learn things such as their likes/dislikes, where they come from, and even who they’re related to. If we compare technology to human beings, you might be surprised to hear that technology and human beings communicate in similar ways.

Now, that doesn’t mean you’re a robot…

Though, I admit, being a robot for a day would be kinda cool…

The point is humans create technology, so it makes sense that we would create things in a way we understand. In later sections, we’ll see examples of how computers talk to each other. For now, let’s just keep the idea that technology and human beings communicate in similar ways.

Ok, so we now know that technology and humans communicate similarly, so what?

If we take a step back and look at how we live our lives, do you think communication is important?

I would say, “Heck yeah!”

Without communication, how would we know how a person feels? If you were sick and needed help, how would you be able to signal someone? Simple human communication, like talking, can only go so far. Facial expressions and non-verbal cues are other ways that we can communicate. Going back to talking, what happens if you want to talk with someone who is not physically with you?

Maybe we could reach them by megaphone?

That might get annoying for the other people around you, and on top of that, the person you intend the message for may not be around to hear it. It’s possible that the person you want to talk to may be 2,000 miles away!

Going back to the megaphone, what happens if you want the message to be heard only by that specific person? Maybe the message is a secret. Screaming your secret through a megaphone would definitely make that no longer a secret.

Computers to the Rescue!

Computer networking can solve many of these communication problems. First, some computer networks can send data at the speed of light in glass: for a medium like single-mode fiber (refractive index around 1.5), that works out to roughly 186,282 / 1.5 ≈ 124,188 miles per second. That's quicker than any speech I could give!

Computers also have the ability to use encryption, which can make data very hard for prying eyes to read.

Let’s say we had the following file of text:

the cat is in the tree.

Let’s now look at that same file when it’s encrypted using a 256-bit AES cipher:

U2FsdGVkX1/ysfGqqcUyY9dzVVKt3AgKs9SphLA9VwN1M01ry9rwJ5/hss+fBKQA

I don’t know about you, but I don’t understand a word of that!
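If you want to try this at home, the ciphertext above is the kind of output OpenSSL produces. A command along these lines (the exact flags vary slightly by OpenSSL version) encrypts a string with 256-bit AES and prints it as Base64:

echo "the cat is in the tree." | openssl enc -aes-256-cbc -base64 -pbkdf2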

Networking is important, whether humans do it directly or use a computer to assist. Computers have allowed us to communicate in ways that redefine what it means to be human. Think about it: we have social media like Facebook that can connect you with people from all over the world. We have cellular networks that make talking with someone over the phone happen in seconds! We even have applications like Google Meet where several people can meet over video.

Just like human communication, communication using computer networks can have positive and negative consequences. With data traveling at incredible speeds, information can be spread quickly to all corners of the world. While relaying life-saving news is a positive, bullying a classmate over social media is a negative.

As we learn more about computers and how they talk to each other, please remember that we must act morally. A computer can do quite a bit of harm if someone tells it to. When using technology, let’s network positively and solve problems that can make this world a better place to live in 🙂

Resources & Review

Credits

Images provided by Creative Commons contributors.

Adventures in Computing

Greetings!

First things first, welcome to Adventures in Computing! 🙂

I'm about to help you navigate the fundamentals of information technology. Now, I know what you're thinking: technology is stuff like this…

What the heck? Is that BINARY!? OH NO! 🙁

Maybe I could…

Now wait a sec! We haven’t even gotten started yet!

As we progress, I will admit concepts may get tricky, but that’s why you have me! 🙂 Also, I want to let you in on a little secret. Looking back at my experiences and the people I worked with, there’s one consistent message I picked up on: technology evolves every day and if someone tells you they know it all, they don’t. I have seen help desk technicians perform tasks that software engineers couldn’t. Skills vary from person to person. Sure, software engineers may be well versed in math, but can they also be great communicators or business magnates? Most of the time the answer is no. We all have skills that make us unique and talented. So please, before you start this series, believe in yourself and know that you are far more capable than you might think.

What to Expect

When you read the title "Adventures in Computing", you may think there's a whole bunch of different topics you could discuss. That's definitely true. For this particular series, the goal is to introduce some broad concepts and specifics while keeping a light, approachable tone. I also want this series to be applicable to all ages. During the series, there will be interactive prompts that help drive the concepts home. Each section will also have learning objectives. If you have any feedback or need help on a certain topic, please leave a comment. With that being said, let's begin 🙂

Credits

Images provided by Creative Commons contributors.

Fun with Flask: Creating Simple GET Endpoints

Python is a great programming language to build web applications with. Not only is the bar to entry lower than with other languages, but there's also a wide variety of web frameworks to choose from (e.g. Flask, Django).

My personal favorite is Flask. Flask is easy to use and has the ability to scale out (e.g. via Blueprints). When building my personal website, I wanted to keep things simple. The design is minimal and serves the intended purpose. With the endpoints, however, I wanted to be more creative. Currently, there are two endpoints: /skills and /education. Both accept only two HTTP methods: GET and OPTIONS. Later on, I'm going to create some cooler endpoints that integrate with other Python libraries. For now, though, I think querying endpoints for data is just as cool.

card.mcclunetechnologies.net is my personal website. The site is essentially my virtual business card. I want to have endpoints that are under the radar and return more information about myself. Right now, you can query /skills and /education by navigating to the endpoints directly. You can also send a GET request using a tool like curl.

Both endpoints return JSON responses. When querying an endpoint like /skills, Python will open a connection to a remote MySQL database and fetch all information within the appropriate table. As the OPTIONS method describes, /skills will return the following:

user@debian:~$ curl -s -XOPTIONS https://card.mcclunetechnologies.net/skills
Supported Methods for /skills: GET
Provides the following information: skill_name (string), skill_description (string), years_of_experience (integer), and comfort_level (string; can either be low, medium, or high)

Sending a GET request to /skills will return the MySQL fetchall() response dumped into JSON:

user@debian:~$ curl -s -XGET https://card.mcclunetechnologies.net/skills
[
[
"Active Directory",
"Active Directory (AD) is a directory service developed by Microsoft for Windows domain networks. It is included in most Windows Server operating systems as a set of processes and services. Initially, Active Directory was only in charge of centralized domain management. However, Active Directory became an umbrella title for a broad range of directory-based identity-related services.",
7,
"medium"
],
[
"Ansible",
"Ansible is an open-source software provisioning, configuration management, and application-deployment tool enabling infrastructure as code. It runs on many Unix-like systems, and can configure both Unix-like systems as well as Microsoft Windows.",
2,
"medium"
],
[
"Apache CloudStack",
"CloudStack is open-source cloud computing software for creating, managing, and deploying infrastructure cloud services.",
3,
"medium"
],
[
"Bash",
"Bash is a Unix shell and command language written by Brian Fox for the GNU Project as a free software replacement for the Bourne shell. First released in 1989, it has been used as the default login shell for most Linux distributions and all releases of Apple's macOS prior to macOS Catalina.",
4,
"medium"
],
[
"Cisco IOS",
"Cisco Internetwork Operating System (IOS) is a family of network operating systems used on many Cisco Systems routers and current Cisco network switches.",
7,
"medium"
],
[
"Git",
"Git is a distributed version-control system for tracking changes in any set of files, originally designed for coordinating work among programmers cooperating on source code during software development.",
4,
"medium"
],
[
"Linux",
"Linux is a family of open-source Unix-like operating systems based on the Linux kernel, an operating system kernel first released on September 17, 1991, by Linus Torvalds. Linux is typically packaged in a Linux distribution.",
7,
"high"
],
[
"Nagios",
"Nagios Core, formerly known as Nagios, is a free and open-source computer-software application that monitors systems, networks and infrastructure.",
6,
"medium"
],
[
"Python",
"Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects.",
2,
"medium"
],
[
"Technical Support",
"Technical support (often shortened to tech support) refers to services that entities provide to users of technology products or services. In general, technical support provides help regarding specific problems with a product or service, rather than providing training, provision or customization of the product, or other support services.",
7,
"high"
]
]

Here’s the /skills endpoint within my Flask view:

from flask import request  # `about` is the Blueprint this view is registered on
import json

@about.route("/skills", methods=["GET", "OPTIONS"])
def skills():
    if request.method == 'GET':
        # Open a connection to the remote MySQL database and grab every row
        conn = mysqlConn()
        skillsCursor = conn.cursor()
        skillsCursor.execute("SELECT * FROM skills")
        skillsInfo = skillsCursor.fetchall()
        skillsCursor.close()
        conn.close()
        return json.dumps(skillsInfo, indent=4)
    elif request.method == 'OPTIONS':
        # skillsOptions is a prebuilt string describing the supported methods
        return skillsOptions

/skills and /education don't have filtering capabilities; however, you can use a tool like jq to achieve similar results. One example is filtering out just the skill names:

user@debian:~$ curl -s -XGET https://card.mcclunetechnologies.net/skills | jq .[][0]
"Active Directory"
"Ansible"
"Apache CloudStack"
"Bash"
"Cisco IOS"
"Git"
"Linux"
"Nagios"
"Python"
"Technical Support"

Explaining & Illustrating curl

curl is an exceptionally useful program. As described on the project homepage (https://curl.se/), curl is a tool to transfer data from or to a server, using one of the supported protocols. curl can be used to send & receive data with the following protocols:

  • DICT
  • FILE
  • FTP
  • FTPS
  • GOPHER
  • HTTP
  • HTTPS
  • IMAP
  • IMAPS
  • LDAP
  • LDAPS
  • POP3
  • POP3S
  • RTMP
  • RTSP
  • SCP
  • SFTP
  • SMB
  • SMBS
  • SMTP
  • SMTPS
  • TELNET
  • TFTP

To better explain curl, I will demo curl on Ubuntu 14.04. First, I will execute curl http://gitlab.com

The command executes very quickly. However, a lot is actually being performed in the background.

curl http://gitlab.com

When the user executes curl http://gitlab.com, a request is sent from the application (i.e. curl) to the kernel.

The kernel acts as a middleman between the system's software applications and hardware. curl needs to talk to some of the hardware components, including the CPU, memory, and network adapter. Given that curl is a network application, the kernel definitely needs to talk with the network adapter. When curl http://gitlab.com is executed, the user is telling curl to send some data over HTTP in hopes of a response.
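If you'd like to watch this application-to-kernel conversation for yourself, strace can log the system calls curl makes. Restricting it to network-related calls keeps the output manageable:

strace -e trace=network curl http://gitlab.com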

In order to visualize the HTTP data being sent to http://gitlab.com, I will use Wireshark to sniff the outgoing packets. Below is the packet capture performed while executing curl http://gitlab.com:

My computer is currently using Google's public DNS server (8.8.4.4). Given that gitlab.com is out on the Internet, my computer has to send a DNS request to Google so the domain name can be translated to an IP address. gitlab.com appears to be at 52.167.219.168. Now that my computer knows the IP address of gitlab.com, HTTP requests can be sent. The curl HTTP requests go from my computer, to my ISP gateway, and out to 52.167.219.168. The process appears to have completed, and data was received.

Let's see what Wireshark collected from the HTTP request:

The text highlighted in red is the request my computer sent to gitlab.com's server. The text highlighted in blue is what gitlab.com's server returned. When the request was sent, gitlab.com returned a 301 status. The HTTP 301 status code means Moved Permanently. This code is usually returned when a user accesses a site and the web server redirects them to another. To prove this holds true, let's open a web browser and go to http://gitlab.com.
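Incidentally, if you don't have Wireshark handy, curl can show the same exchange by itself. The -I flag fetches only the response headers, and -L follows the redirect (the exact output will vary):

curl -sI http://gitlab.com
curl -sIL http://gitlab.com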

When launching the HTTP request from my web browser, the server redirected me to this:

The URL http://gitlab.com takes you to the home page of gitlab.com:

Hard to believe all of that happens just by executing one small command! The preceding was the whole curl process, from the kernel, over the network, to completion on gitlab.com's server.

(Originally posted on June 8th, 2017. Updated on December 29th, 2020)

Seattle: A Vulnerable Web Application (Walkthrough)

Lately, I've been gearing up for a cyber security conference which includes a CTF (Capture the Flag) competition. Being a newbie in the realm of computer security, I have been practicing my ethical hacking skills with the help of open source applications. There are so many free tools on the Internet, one of them being Seattle, an open source Linux distribution that includes a vulnerable web application. For more information, please follow this link: https://www.gracefulsecurity.com/vulnvm/

To give you an idea of what this application looks like, here is a screenshot:

The web application appears to be an online music store. This application includes some of the following vulnerabilities:

SQL Injection (Error-based)
SQL Injection (Blind)
Reflected Cross-Site Scripting
Stored Cross-Site Scripting
Insecure Direct-Object Reference
Username Enumeration
Path Traversal
Exposed phpinfo()
Exposed Administrative Interface
Weak Admin Credentials

During this walkthrough, I will point out two of these vulnerabilities.

The first thing I would do in any hacking situation would be reconnaissance. This includes using port scanners such as nmap. Even though the application is listening on port 80 (HTTP), it’s still wise to see if any other attack vectors exist.

As I suspected, only port 80 is open. Now, I will run nikto (from my Kali Linux VM) to see which HTTP vulnerabilities can be found.
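For anyone recreating the scans, the commands were along these lines (the target IP is whatever your hypervisor assigned; 192.168.56.101 is just a placeholder):

nmap -sV 192.168.56.101
nikto -h http://192.168.56.101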

nikto produced all sorts of different findings. However, two things pop out at me. One of them is the detection of phpinfo() output. On web servers running PHP, you can create a file that outputs information about the PHP environment. Though useful to a system administrator, the information provided by phpinfo() can be detrimental if it falls into the hands of an attacker.

Typically, this file is created by using the following PHP script:

<?php

phpinfo();

?>

If this file is in the web server root, an attacker can navigate to this file and see all of that precious information. Below is an example:

This is something we would definitely need to address, especially when dealing with web server security.

For the second vulnerability, we'll perform directory traversal. This vulnerability allows an attacker to utilize an improperly programmed script to "traverse" the system's file directories from outside the web server root. Directory traversal can allow an attacker to see important files that might contain passwords, configurations, etc.

Luckily, referring back to our nikto scan, I found something really interesting.

I feel that there might be some more PHP vulnerabilities. The first thing I check is the dynamic PHP pages. After a couple of tries, I eventually find a vulnerable PHP page (download.php). Using the netcat application on port 80, I was able to inject a crafted request that let me see the /etc/passwd file. The attack is illustrated below:

Using the "dot dot slash" technique, I was able to traverse up the directories and uncover the /etc/passwd file. This is a high-risk vulnerability that would need to be addressed as soon as possible.
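To give a feel for the raw traffic, here's a sketch of the kind of request sent through netcat (the query parameter name is my guess for illustration; check download.php's actual parameters):

printf 'GET /download.php?item=../../../../etc/passwd HTTP/1.0\r\nHost: 192.168.56.101\r\n\r\n' | nc 192.168.56.101 80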

I uncovered other vulnerabilities; however, for the sake of brevity, I am not going to discuss those. If you would like to give this challenge a try, please refer to the link above. I used my trusty Oracle VM VirtualBox to set up my pentesting lab. The other cool feature of the Seattle practice application is the ability to auto-import the system into Oracle VM VirtualBox for a quick setup.

(Originally posted on July 1st, 2016. Updated on December 29th, 2020)

Please Bring Back Cub Linux!

Cub Linux (formerly Chromixium) is a great Linux distribution that mixes both the Chrome and Ubuntu experience. Cub Linux's development has officially stopped; however, there is hope that Cub Linux will carry on. There is talk that a fork of Cub Linux is in development. The forked project is called Phoenix Linux. For more information, please visit this open issue:

https://github.com/CubLinux/one/issues/4

I commented on the issue:

I just want to say that I continue to use Cub Linux everyday! I love Cub Linux! 

I don’t have a great amount of development experience within furthering Linux OS features. However, if there is anything I can do to help, please let me know! I have taken Cub Linux (Ubuntu 14.04) and upgraded it to Ubuntu 16.04. There were some features that broke (going from 14.04 to 16.04). However, it still works okay for me.

Very eager to see Phoenix Linux! 

I have to speak my mind on this project because Cub Linux needs to continue. I understand that in the open source community, there can be developers that feel unappreciated. I am writing this to say that every project in the open source community is welcome and appreciated! No matter what a project’s purpose is, everyone should be welcome to contribute to open source applications.

Thank you to anyone reading this post! Please spread the word about Cub Linux! Here are some resources to get you acquainted with what Cub Linux is, if you don’t know already:

https://en.wikipedia.org/wiki/Cub_Linux

http://www.makeuseof.com/tag/replicate-chrome-os-laptop-cub-linux/

https://github.com/CubLinux

(Originally posted on August 5th, 2017. Updated on December 29th, 2020)

SANS Holiday Hack 2015: Gnome in Your Home

The SANS Institute created a fun hacking challenge for this year's Holiday Hack. Everything from packet analysis to web exploits was included. The packet capture showed how the devious Gnomes were taking pictures of children and exfiltrating them through DNS queries. To be honest, I didn't think this type of malicious activity was possible (especially through DNS queries). Here's what I'm talking about:

The following is the packet capture with the image encoded in Base64. Using tools such as Burp Suite and the Linux base64 utilities will help uncover the image secretly hiding within.

Here is the packet capture with the UDP stream of DNS requests being sent to the SuperGnomes. SuperGnomes were the supercomputers that housed the data collected from the individual GIYHs (Gnomes in Your Home).

After exporting the UDP stream to a RAW format, I used foremost (a tool included with Kali Linux) to carve out the picture. The JPG was recovered after decoding the Base64 payload within the capture.
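The carving steps looked roughly like this (the filenames are mine; yours will differ):

base64 -d dns_payload.b64 > carved.raw
foremost -i carved.raw -o carved_output/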

Here is 00000000.jpg, which was the file being sent over the Gnome’s command and control channel:

The packet capture challenge was just one of many activities in this year’s Holiday Hack. For the sake of brevity, I am only going to post this one. However, if you are interested in doing this challenge next year, please check out the website at https://holidayhackchallenge.com or by going to https://sans.org.

(Originally posted on January 10th, 2016. Updated on December 29th, 2020)

OSPF Routing

This is an OSPF single-area network. OSPF, or Open Shortest Path First, is an interior gateway routing protocol used in dynamic network architectures.

The topology above is a simple OSPF setup, nothing fancy. Hopefully, I can expand on this network in the future and make it more sophisticated.
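For reference, a minimal single-area OSPF configuration on a Cisco router looks something like this (the process ID and addresses are placeholders, not the ones from this lab):

R1(config)# router ospf 1
R1(config-router)# network 192.168.1.0 0.0.0.255 area 0
R1(config-router)# network 10.0.0.0 0.0.0.255 area 0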

(Originally posted on September 29th, 2015. Updated on December 29th, 2020)

IGRP & EIGRP Routing

As you can see, GNS3 does an awesome job of simulating real-life networks. In this lab, I simulated two Cisco 3700 Series Routers running EIGRP with an autonomous system ID of 12. The two routers are connected via Serial, and the network layout is as follows:

192.168.2.2 (My real-life network adapter; I can communicate with the virtual network from the real world)

192.168.2.4 (R1's FastEthernet0/0 adapter, in order to communicate with the outside world)

192.168.15.10 (AS12) – Network connection in order to bridge the two networks together (192.168.2.0 and 172.16.34.0)

192.168.15.11 (AS12) – Network connection in order to bridge the two networks together (192.168.2.0 and 172.16.34.0)

172.16.34.1 – Virtual FastEthernet0/0 interface (R2) on the other side of the virtual network.

In the end, I was able to ping from my real Windows laptop to R2’s FastEthernet0/0 interface (172.16.34.1). Although I had to add the routes manually in Windows, this still proves connectivity from the real world to the virtual world.
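For reference, the EIGRP side of this lab is only a few lines of configuration per router (a sketch based on the addresses above; your interfaces and masks may differ):

R1(config)# router eigrp 12
R1(config-router)# network 192.168.2.0
R1(config-router)# network 192.168.15.0
R1(config-router)# no auto-summary

R2(config)# router eigrp 12
R2(config-router)# network 192.168.15.0
R2(config-router)# network 172.16.34.0 0.0.0.255
R2(config-router)# no auto-summary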

(Originally posted on September 28th, 2015. Updated on December 29th, 2020)