Request for information – Business performance and ownership

Under the authority of the Statistics Act, Statistics Canada is hereby requesting the following information, which will be used solely for statistical and research purposes and will be protected in accordance with the provisions of the Statistics Act and any other applicable law. This is a mandatory request for data.

Business dynamics

Corporate insolvency microdata

What information is being requested?

Information on corporations (legal names, trade names and addresses) that have filed for corporate insolvency is being requested.

What personal information is included in this request?

This request does not include personal information.

What years of data will be requested?

Monthly data beginning in 2006 (ongoing)

From whom will the information be requested?

This information is being requested from the Office of the Superintendent of Bankruptcy.

Why is this information being requested?

Statistics Canada is requesting the Corporate Insolvency Microdata to help provide timely statistics on permanent firm closures. The COVID-19 pandemic has led the Government of Canada to introduce a number of measures such as the Canada Emergency Wage Subsidy, the Canada Emergency Business Account and the Canada Emergency Commercial Rent Assistance Program to support businesses and limit the number of business failures during the pandemic. Timely measures of permanent business closures will provide information on whether the objectives of these support programs are being met. The success of these programs will directly impact Canadians, as the survival of businesses during the pandemic directly affects the employment opportunities available to Canadians.

Statistics Canada may also use the information for other statistical and research purposes.

Why were these organizations selected as data providers?

As part of its mandate, the Office of the Superintendent of Bankruptcy is responsible for the administration of the Bankruptcy and Insolvency Act. As such, it maintains up-to-date data on corporate bankruptcies in Canada.

When will this information be requested?

November 2020 and onward (monthly)

When was this request published?

October 28, 2020

Business ownership

Co-operative businesses

What information is being requested?

Statistics Canada is requesting information on active co-operatives in Canada. A non-financial co-operative is a corporation that is legally incorporated under specific federal, provincial or territorial co-operative acts and that is owned by an association of people seeking to satisfy common needs, such as access to products or services, sale of products or services, or employment.

The request includes business information such as the names and contact information of the co-operatives, information identifying each co-operative's area of activity, and a list of closures, dissolutions, amalgamations or name changes that may have taken place.

What personal information is included in this request?

This request does not contain any personal identifiers.

What years of data will be requested?

Data for all active co-operatives, as of December 31, 2020.

From whom will the information be requested?

This information is being requested from information services, business support services and other provincial and territorial public administrations.

Why is this information being requested?

Statistics Canada requires this information in order to produce custom tabulations as part of a joint project with Innovation, Science and Economic Development (ISED) Canada. The produced tabulations will replace ISED's longstanding survey on co-operatives in Canada.

Co-operative businesses have an important economic role to play in generating jobs and growth in communities across Canada. Existing in every sector of the economy, co-operatives provide needed infrastructure, goods and services to over 8 million members and jobs to more than 95,000 Canadians. This project offers Canadians, policymakers, researchers and industry stakeholders an accurate depiction of the size and makeup of this sector.

Statistics Canada may also use the information for other statistical and research purposes.

Why were these organizations selected as data providers?

The organizations manage and maintain the co-operative registry for their respective provinces. Each data provider is an entity of the provincial government and the only source of the required data. In collaboration with Innovation, Science and Economic Development (ISED) Canada, the data are used to replace a longstanding ISED survey with more timely, accurate and cost-effective statistics.

When will this information be requested?

February 2022 and onward (annually)

When was this request published?

February 21, 2022

Financial statements and performance

Financial sector data

What information is being requested?

The desired information includes financial information from federally regulated financial institutions, including assets and debts aggregated by institution, with counterparty information broken out where available, and loan level data containing associated characteristics such as type of loan and borrowing terms. The counterparty information will specify how much lending goes to each of the other sectors in the economy.

What personal information is included in this request?

This request does not contain any personal information.

What years of data will be requested?

All current data holdings, historical (as available), and on an ongoing basis.

From whom will the information be requested?

This information is being requested from the Office of the Superintendent of Financial Institutions (OSFI).

Why is this information being requested?

Statistics Canada is requesting this information to develop and publish statistics on financial activity and lender/borrower relationships in the Canadian economy. The National Economic Accounts, including the Financial and Wealth Accounts (FWA), contain estimates on financial services with related incomes, assets, and liabilities (i.e. debt) broken down by various levels of sector and instrument detail. The additional information will help validate and complement currently available data holdings.

As Canada's banking industry regulator, OSFI already collects this data. This acquisition will avoid duplication of efforts and prevent increased burden for respondents.

The overall result of acquiring these new data will be an increased level of quality and detail of national financial statistics. This means policymakers, researchers, and other data users will have a more precise and detailed portrait of the financial system in Canada.

Statistics Canada may also use the information for other statistical and research purposes.

Why were these organizations selected as data providers?

OSFI is the national regulator for the financial sector in Canada and thus has the legal authority to collect this type of detailed financial data.

When will this information be requested?

This information is being requested in September 2021.

What Statistics Canada programs will primarily use these data?

When was this request published?

August 18, 2021

Summary of Changes

February 2024 – Inclusion of additional details on requested loan level information.

Other content related to Business performance and ownership

Business financing and supporting programs data

What information is being requested?

The data requested are the names of the enterprises, their business numbers and addresses, program data (projects, agreements), the value, date and type of support provided to each enterprise, and the name of the program stream.

What personal information is included in this request?

This request does not contain any personal information.

What years of data will be requested?

Annual data from January 2018 to latest year available.

From whom will the information be requested?

This information will be requested from:

  • Business Development Bank of Canada
  • Export Development Canada
  • Ministère de l'Économie, de l'Innovation et de l'Énergie du Québec
  • Institut de la statistique du Québec
  • Secrétariat du Conseil du trésor du Québec
  • Ministère de la Cybersécurité et du numérique du Québec
  • Conseil de l'innovation du Québec
  • Ontario Ministry of Economic Development, Job Creation and Trade
  • Canadian Commercial Corporation

Why is this information being requested?

Statistics Canada has been acquiring data on federal support to innovation and growth from all departments on an annual basis through the Business Innovation and Growth Support (BIGS) program since 2018. To complete this portrait and better understand business innovation in Canada, data from provincial organizations and Crown corporations are required.

Statistics Canada requires this information to create and publish statistics on innovation and growth support to businesses in Canada. These statistics will help provide a more accurate picture on which to design and optimize programs for the benefit of Canadians, and will be used by policy makers, researchers and industry stakeholders to demonstrate the extent to which governments are supporting Canadian businesses and the economy. Statistics Canada may also use the information for other statistical and research purposes.

Why were these organizations selected as data providers?

These organizations have been identified as having detailed information on business innovation and growth that will help fill current data gaps. For the provincial organizations, a pilot project is being conducted for the next cycle with the addition of data from Ontario and Québec. Future cycles will likely include other provinces.

When will this information be requested?

September 2024.

What Statistics Canada programs will primarily use these data?

When was this request published?

June 14, 2024

Monthly Survey of Food Services and Drinking Places: CVs for Total Sales by Geography - July 2020

CVs for Total sales by Geography (percentage)
Geography  2019-07  2019-08  2019-09  2019-10  2019-11  2019-12  2020-01  2020-02  2020-03  2020-04  2020-05  2020-06  2020-07
Canada 0.69 0.57 0.59 0.56 0.58 0.61 0.67 0.59 0.63 1.22 1.29 1.13 1.21
Newfoundland and Labrador 2.87 2.49 3.13 3.19 2.77 3.06 2.94 3.17 3.10 4.99 4.02 3.97 5.25
Prince Edward Island 6.84 4.93 4.01 4.53 4.75 4.16 3.67 3.40 2.84 2.54 2.84 3.35 4.18
Nova Scotia 4.65 4.62 2.76 2.94 3.45 3.56 2.06 2.95 2.93 5.03 5.04 3.97 4.09
New Brunswick 2.28 1.30 1.56 1.87 1.45 1.40 1.35 2.16 2.47 4.36 4.44 3.89 3.43
Quebec 1.97 1.41 1.32 1.26 1.37 1.22 1.37 1.17 1.38 3.74 3.47 2.69 2.86
Ontario 1.11 0.94 1.04 0.96 0.99 1.02 1.05 0.97 1.03 1.97 2.14 1.89 1.99
Manitoba 2.43 2.74 2.18 2.42 1.95 2.00 1.92 1.80 2.18 4.91 4.17 3.73 5.00
Saskatchewan 1.92 1.92 1.58 1.59 1.79 1.56 1.51 1.68 1.98 3.68 3.32 2.66 3.19
Alberta 1.32 1.24 1.18 1.23 1.29 1.33 1.37 1.29 1.76 3.07 3.41 3.11 2.61
British Columbia 1.69 1.57 1.60 1.65 1.62 1.96 2.45 1.98 1.89 3.18 3.45 3.18 3.81
Yukon Territory 5.95 4.95 5.88 7.06 6.05 6.69 7.22 5.05 4.97 5.09 5.95 6.91 4.08
Northwest Territories 1.00 0.91 1.00 1.46 1.59 0.88 0.98 0.80 0.85 2.33 2.10 1.46 2.39
Nunavut 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

Analysis 101, part 4: Case study

Catalogue number: 892000062020012

Release date: September 23, 2020

In this video, we will review the steps of the analytical process.

You will obtain a better understanding of how analysts apply each step of the analytical process by walking through an example. The example that we will discuss is a project that examined the relationship between walkability in neighbourhoods, meaning how well they support physical activity, and the actual physical activity of Canadians.

Data journey step
Analyze, model
Data competency
Data analysis
Audience
Basic
Suggested prerequisites
Length
9:01
Cost
Free

Watch the video

Analysis 101, part 4: Case study - Transcript

(The Statistics Canada symbol and Canada wordmark appear on screen with the title: "Analysis 101, part 4: Case study")

Analysis 101: part 4 - Case Study

Hi, welcome to our analysis 101 case study. Before you watch this video, make sure you've watched videos 1, 2 and 3 so that you're familiar with the three stages of the analytical process.

Learning goals

In this video we will review the steps of the analytical process, and you will obtain a better understanding of how analysts apply each step by walking through an example. The example that we will discuss is a project that examined the relationship between walkability in neighborhoods, meaning how well they support physical activity, and the actual physical activity of Canadians.

Steps in the analytical process

(Diagram of 6 images representing the steps involved in the analyze phase of the data journey where the first steps represent the making of an analytical plan, the middle steps represent the implementation of said plan and the final steps are the sharing of your findings.)

Throughout this video we will refer back to the six steps of the analytical process and illustrate these steps through our walkability example.

What do we already know?

For our analytical plan, let's start by understanding the broader context. What do we already know about the topic? Well, we already know that obesity is a problem in Canada. Insights from the Canadian Health Measures Survey show that 29% of Canadian children and youth are overweight or obese, while 60% of Canadian adults are overweight or obese. We also know that many Canadian adults and children are not active enough. Data from the Canadian Health Measures Survey show that 33% of Canadian children and youth meet the physical activity guidelines, meaning that about 66% do not. Likewise, 18% of Canadian adults meet the physical activity guidelines.

(Text: "Without being aware of it, our neighbourhoods and how they are built influence how healthy we are.")

These challenges have led to increased attention on the idea of changing the environment in which we live to help Canadians make healthier lifestyle choices. This idea was the focus of the 2017 Chief Public Health Officer's report on the state of public health in Canada, which noted that shifting behaviors is challenging. What would help Canadians become more active? More parks, better walking paths, or safer streets? Should policy makers look at crime rates? The list is endless.

What do we already know? Environments shape our health

There are a number of ways that our environment can influence our health behaviors. For example, our built environment such as how walkable our neighborhood is, or our health behaviors like how long we commute, or how many sports we participate in, can have an impact on our mental and physical health. Think about your own neighborhood. Does the design of your neighborhood make it easy or hard for you to walk to and from places or to get outside to exercise or play with your kids?

What do we already know? Knowledge gaps

Now that we understand the broader topic, let's identify the knowledge gaps. Previous studies had already demonstrated that Canadian adults living in more walkable neighborhoods are more active. However, those findings focused on a few Canadian cities and did not provide national estimates. Likewise, previous work focused on how to get adults more active, but was limited in its analysis of children.

What is the analytical question?

Identifying a relevant analytical question is important to defining the scope of your work. For this study, the main question was: does the relationship between walkability and physical activity in Canada differ by age? That's a clear, well-defined question, and it's written in plain language.

Prepare and check your data

(Text: Canadian Active Living Environments Database)

Now it's time to implement our plan. The first step is preparing and checking our data. Given that we had access to a new Canadian walkability dataset, we wanted to leverage this new data source before going any further. Let me give you some more context on walkability. Essentially, walkability means how well a neighborhood supports physical activity. Walkability is higher in denser neighborhoods, such as those with more people living on one block. It's also higher in neighborhoods with more amenities, like access to transit, grocery stores or schools, or neighborhoods with well connected streets. Each neighborhood was assigned a walkability score from one to five. If you live in a suburban area outside the city core, your neighborhood will likely have a walkability score of three. Downtown neighborhoods will likely have a score of four or five.

Perform the analysis

(Text: Canadian Active Living Environments; Canadian Community Health Survey (ages: 12+ years); Canadian Health Measures Survey (ages: 3 to 79 years))

For our analysis, we linked external walkability data to two major Statistics Canada health surveys. We made use of both surveys because they use different measurements for physical activity. One survey asked respondents to self-report their daily exercise, while the other made use of accelerometers. Accelerometers capture minute-by-minute movement data. Think of it as a fancy pedometer.

Summarize and interpret your results

After some data cleaning, concept defining and lots of documenting of our analytical decisions, we started crafting a story based on our findings. Our main finding was that adults in more walkable neighborhoods are more active. However, different patterns were observed for children and youth: their physical activity was pretty consistent across different levels of neighborhood walkability. When we started this work, there was a lot of evidence linking physical activity and neighborhood walkability in adults, but only a few studies examining children. Some studies found that children were more physically active in more walkable neighborhoods, while others found the opposite. We performed age-specific analysis to examine this in greater detail and found that children under 12 are more active in neighborhoods with low walkability, like car-oriented suburbs, which may have larger backyards, schoolyards and parks where they can run around and play safely. But the relationship for children 12 and over was similar to that of adults: they were more physically active in higher-walkability neighborhoods. Summarizing your results in simple terms is key to getting your message across to various audiences. As you learned in previous videos, translating complex analysis into a cohesive story is important. It's your job to digest the information and guide your reader through your storyline.

Summarize and interpret your results: So what?

Interpreting the results also involves helping your audience understand the "so what" factor. For us, this meant highlighting that walkability is a relevant concept for adults, but that we need to think differently about how to support physical activity in children. For example, what about parks, neighborhood safety and crime rates? Explain to your reader how your findings fit within the existing body of literature. It's also a great practice to communicate what needs to be done going forward to advance our knowledge, and to flag any limitations of the study.

Disseminate your work

This project led to some very interesting analysis, which we shared in different ways with stakeholders, policy makers and Canadians. Two major research papers were published for a more expert audience, and we also created an infographic on key points for a more general audience.

Summary of key points

(Diagram of 6 images representing the steps involved in the analyze phase of the data journey where the first steps represent the making of an analytical plan, the middle steps represent the implementation of said plan and the final steps are the sharing of your findings.)

The analytical process is a journey, and it often takes much longer than you anticipate. First, understand your topic and take your time to develop a clear and relevant analytical question. Make sure to check and review your data throughout the process, and strive to translate your findings into a meaningful and interesting narrative. That way, people will remember your work.

(The Canada Wordmark appears.)

What did you think?

Please give us feedback so we can better provide content that suits our users' needs.

Analysis 101, part 3: Sharing your findings

Catalogue number: 892000062020011

Release date: September 23, 2020

In this video, you will learn how to summarize and interpret your data and share your findings. The key elements to communicating your findings are as follows:

  • select your essential findings,
  • summarize and interpret the results,
  • organize and assess reviewers' comments and
  • prepare for dissemination
Data journey step
Analyze, model
Data competency
Data analysis
Audience
Basic
Suggested prerequisites
Length
11:38
Cost
Free

Watch the video

Analysis 101, part 3: Sharing your findings - Transcript

(The Statistics Canada symbol and Canada wordmark appear on screen with the title: "Analysis 101, part 3: Sharing your findings")

Analysis 101: Part 3 - Sharing your findings

Hi, welcome to analysis 101, video 3. Now that we've learned how to plan an analytical project and perform the analysis, we'll discuss best practices for interpreting and sharing your findings.

Learning goals

In this video you will learn how to summarize and interpret your data and share your findings. The key elements to communicating your findings are as follows: select your essential findings, summarize and interpret the results, organize and assess reviewers' comments, and prepare for dissemination.

Steps in the analytical process

(Diagram of 6 images representing the steps involved in the analyze phase of the data journey where the first steps represent the making of an analytical plan, the middle steps represent the implementation of said plan and the final steps are the sharing of your findings.)

Going back to our six analytical steps, we'll focus on sharing our findings. If you've been watching the data literacy videos by Statistics Canada, you'll recognize that this work is part of the third step, which is the analyze phase of the data journey.

Step 5: Summarize and interpret your results

Let's start by discussing how to summarize and interpret your results.

Tell the story of your process

(Image of the 4 parts of the 5th step: Context - Evidence from other countries or anecdotal; Methods - Compare millennials (aged 25-34) to previous generations; Findings - Millennials have higher net worth and higher debt than Gen-X; Interpretation - Mortgages main contributor to debt for millennials.)

Presenting your findings clearly to others is one of the most challenging aspects of the analytical process. Let's use the millennial paper as an example. First we started with the context, where we highlighted previous findings for American millennials, which motivated our study on Canadian millennials. Then we discussed our data and methodology, defining millennials and explaining how we compared them with previous generations. Then we walked through the key findings of the storyline. For example, we explained that while millennials had higher net worth than Generation X when they were younger, millennials were also more indebted. Finally, we interpreted our findings, digging deeper into the why. For millennials, we found that mortgage debt, which reflects higher housing values, contributed to their higher debt load.

Carefully select findings that are essential to your story

You'll likely produce several data tables or estimates throughout your analytical journey. Carefully select the findings that are essential to telling your story. Revisit your analytical questions and select visuals that clearly help to answer these questions. Remember that your results are not the story, but the evidence that supports your story.

Summarize your findings and present a logical storyline

Once you've selected the key results, summarize your findings and present them according to a logical storyline. Identify the key messages. Often these messages will serve as subheadings in a report or study. Also, always make sure to discuss your findings within the broader context of the topic. You've done great work and you want people to remember what your analysis contributes to the literature. Creating a clear storyline will ensure that people remember your work.

Define concepts

(Text on screen: A millennial is anyone in our dataset between 25 to 34 years old in 2016)

As you may recall from video 2, project-specific definitions of key concepts may have been established before starting your analysis. It's worthwhile to include any relevant definitions in your written analysis, like our definition of a millennial. This will help the audience better understand your findings.

Avoid jargon and explain abbreviations

In your written analysis, avoid jargon and explain abbreviations clearly. For example, instead of using a statistical term such as synthetic birth cohort, explain your results in plain language. Define any acronyms that you use, like CSD, which stands for census subdivision, at the earliest possible opportunity.

Maintain neutrality

(Text on screen: Subjective - Large/small, High/Low, Only/A lot; Neutral - Rose or fell by X%, Higher or lower by X times.)

Ensure that you're maintaining neutrality by using plain language and not overstating your results or speculating when interpreting them. Avoid qualifiers like large, high, or only, which can be subjective, and focus on explaining things using neutral language.

Here are some examples that were not neutral and were improved by letting the data tell the story. Instead of "employment growth plummeted down by 2%", you can say "over the previous quarter, employment fell 2%, the largest decline in the past two years." The second statement maintains neutrality. Instead of "millennials are dealing with a significantly worse housing market and have a lot more debt", you can say "median mortgage debt for millennials aged 30 to 34 reached over 2.5 times their median after-tax income." Don't rely on exaggerations to make your point; stay neutral. These statements are robust and supported by the data.

Expect to make mistakes

Expect that you will make mistakes. It's a normal part of analytical work. Remember that you're the person most familiar with your project, which puts you in an ideal position to identify mistakes. When you complete your preliminary draft, leave it alone for a few days and review it with fresh eyes. Don't be afraid to ask others for help in correcting your errors, and remember that learning from your mistakes will strengthen your analytical skills.

Step 6: Disseminate your work

Next, we're going to review the last step, which is how to prepare your work for dissemination and communicate your findings successfully.

Ask others to review your work

An important part of preparing your work for dissemination is asking others to review your work. You can request feedback from a range of people such as colleagues, managers, subject matter experts and data or methodology experts.

Seek feedback on different aspects of your work

Ask your reviewers for feedback on different aspects of your work, such as the clarity of your analytical objectives, appropriateness of the data you've used, definition of concepts, review of literature, methodological approach, interpretation of your results and clarity and neutrality of your writing.

Organize and assess reviewers' comments

After receiving comments from your reviewers, organize and assess their feedback. Look for any concerns that are common across reviewers' comments and determine which concerns will require additional analysis. Make sure to clarify anything that reviewers struggled to understand.

Document how you addressed reviewers' comments

Document how you've addressed each of the reviewers' comments. If you're not able to address certain concerns, it's important to justify why. In some cases, your organization may require that you provide a formal response to reviewers' comments. However, even if this is not required, it is a best practice to make note of the decisions you make when revising your work.

Preparing your work for publication involves many people and processes

Typically, many processes and many people are involved in helping to prepare your analytical product for dissemination. At Statistics Canada, analytical products undergo editing, formatting, translation, accessibility assessment, approval processes and the preparation of a press release. You will want to consider these requirements for your work, whether it's a briefing note, an infographic or information on your organization's website.

How your work is published depends on your intended audience

How your work is disseminated will depend on your intended audience. You need to think about who the intended audience is, what they already know, and what they need to know. For example, the general public will want high-level key messages, while the media or policy analyst community will want more information, visuals and charts. Researchers, academics, or experts will want details about your data, methodology and the limitations of your work.

How your work is published depends on your intended audience: Media and the general public

For example, we often provide highlights visually through charts and infographics when communicating findings to the general public. For a study on the economic well-being of millennials, the findings were communicated through Twitter, an infographic and a press release that summarized the key messages of the analysis.

How your work is published depends on your intended audience: Policy-makers

Other audiences, such as policy makers, may be interested in more detailed findings or a different venue where they can have their questions answered quickly. Results from the millennial study were shared with analysts and policy makers through a webinar, the publication of a study with detailed results, and other presentations.

How your work is published depends on your intended audience: Researchers, academics, experts

Findings are shared with researchers, academics or experts by publishing the analysis in detailed research papers or journal articles in peer-reviewed publications, as well as by presenting at conferences. This audience will be more invested in the specific details of the work and in knowing where the findings fit into the larger research field and knowledge base.

Communicating your work to the media requires preparation

Lastly, preparation is essential to successfully communicate your work to the media. Check to see if your organization offers media training. Prior to sharing your findings with the media, devote time to summarizing your main results and determining your key messages. Think about how to communicate your findings in simple terms. Anticipate potential questions and create a mock question and answer document.

Summary of key points

And that's a quick description of how to review and disseminate your work. First, tell the story of your process. Second, interpret your findings using clear and neutral language. Third, ask others to review your work. And fourth, preparation is key to communicating your findings. Remember to always stay true to your analytical question while telling a clear story. Next, take a look at our case study, where we provide an example of the analytical process through the lens of a study about neighborhood walkability and physical activity.

(The Canada Wordmark appears.)

What did you think?

Please give us feedback so we can better provide content that suits our users' needs.

Analysis 101, part 2: Implementing the analytical plan

Catalogue number: 892000062020010

Release date: September 23, 2020

By the end of this video, you will learn how to implement your analytical plan:

  • preparing and checking your data,
  • performing your analysis and
  • documenting your analytical decisions.
Data journey step
Analyze, model
Data competency
Data analysis
Audience
Basic
Suggested prerequisites
Analysis 101, part 1: Making an analytical plan
Length
6:11
Cost
Free

Watch the video

Analysis 101, part 2: Implementing the analytical plan - Transcript

(The Statistics Canada symbol and Canada wordmark appear on screen with the title: "Analysis 101, part 2: Implementing the analytical plan")

(The Statistics Canada symbol and Canada wordmark appear on screen with the title: "Analysis 101, part 2")

Implementing the analytical plan (Analysis 101: Part 2)

Hi, welcome to analysis 101, video 2. Make sure you've watched video 1 before you start, because we're diving right back in. Now that we've learned how to plan an analytical project, we'll discuss best practices for implementing your plan.

Learning goals

In this video you will learn how to implement your analytical plan. The key steps in implementing your plan include preparing and checking your data, performing your analysis, and documenting your analytical decisions.

Steps in the analytical process

(Diagram of 6 images representing the steps involved in the analyze phase of the data journey where the first steps represent the making of an analytical plan, the middle steps represent the implementation of said plan and the final steps are the sharing of your findings.)

In the first video we went through how to plan your analysis. In this video we'll go through how to implement your plan. If you've been watching the data literacy videos by Statistics Canada, you'll recognize that this work is part of the third step, which is the analyze phase of the data journey.

Step 3: Prepare and check your data

The first step in implementing your plan is to prepare and check your data. Preparing and checking your data will make your analysis more straightforward and rigorous.

Define your concepts

Start by defining your concepts. In our previous example, which examined the economic status of millennials, we needed to determine how we would define millennials. In the literature, we found no official definition for that generation, but many different recommendations. It's important to make an analytical decision that's meaningful and defendable, to apply it consistently, and to document your decision. In this paper, millennials were defined as those aged 25 to 34 in 2016, an age group that aligns with our typical definition of young workers.

Clean up the variables and the dataset

Now that the concepts are clear, we'll start digging into the data. Start by cleaning and preparing your dataset. You'll want to rename the variables so that they are meaningful and formatted in a consistent manner. For example, rather than using the name Var 3, which is confusing, we rename the variable highest degree earned, which is much clearer. The effort you invest at this step will make your life easier as you proceed with your analysis, especially if you document your decisions well.
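
As a minimal sketch of this renaming step in Python, assuming the data sit in a pandas DataFrame; the file name and the generic column name var3 are hypothetical stand-ins for the example above:

```python
import pandas as pd

# Hypothetical extract; "var3" stands in for a confusing machine-generated name.
df = pd.read_csv("survey_extract.csv")

# Rename variables so they are meaningful and consistently formatted.
df = df.rename(columns={"var3": "highest_degree_earned"})

# Document the decision where it happens:
# var3 -> highest_degree_earned (highest certificate, diploma or degree).
```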

Check your data

(Table presenting the economic well-being research by generation, where the left column represents the generational groups and the middle and right columns represent the average age in 1999 (Gen-Xers = 26 years old & Millennials = 14 years old) and 2016 (Gen-Xers = 43 years old & Millennials = 66 years old), respectively.)

At this stage, check your data to ensure that it's of the highest quality. For our example, we should check the average age by generational group to make sure there is no issue with how age is calculated. The average age for Generation X is 26 years old in 1999, and in 2016 their average age is 43. This makes sense. However, while millennials are 14 years old on average in 1999, they are 66 on average in 2016. In this case we should check our program code, examine the data to fix the error, and document why this error occurred.
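
A minimal sketch of this kind of consistency check in Python, using toy numbers that mirror the table above (the column names are hypothetical):

```python
import pandas as pd

# Toy version of the generational file, mirroring the example's averages;
# the 66 reproduces the error caught in the video.
df = pd.DataFrame({
    "generation": ["Gen-X", "Gen-X", "Millennials", "Millennials"],
    "year": [1999, 2016, 1999, 2016],
    "age": [26, 43, 14, 66],
})

# Average age by generational group and reference year.
check = df.groupby(["generation", "year"])["age"].mean().unstack("year")
print(check)

# A group averaging 14 in 1999 should average about 31 in 2016, not 66:
# flag rows where the ages are inconsistent so the derivation can be fixed.
expected_2016 = check[1999] + (2016 - 1999)
print(check[(check[2016] - expected_2016).abs() > 2])
```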

Data checks throughout your analysis

To add rigor to your analysis, there are data checks that you should perform at different stages. In the early stages you can check the raw data to ensure that it's clean and ready for analysis. You can also check the frequency distributions of the variables to ensure that the data are consistent with past datasets. Then as you are checking the results of your analysis, you can verify whether your findings are consistent with the literature. All of this work should be done in well documented code that is saved for future reference.

Step 4: Perform the analysis

The second step in implementing your plan is to perform the analysis. As discussed in video one, your analysis should be planned out when creating your analytical plan. So once your data are clean and prepared, you're ready to perform the analysis.

Implementing your plan

Performing the analysis should be straightforward if you created a clear analytical plan and cleaned and prepared your data appropriately. Conduct your analysis as planned and, as discussed previously, check your results as you go to ensure that the data and methods you are using are producing valid results. Another benefit of checking your results as you go is that you can flag unexpected findings.

Be flexible

If you have unexpected results, this may be due to an error in the data, or it might be some unexpected research finding. Be flexible and adjust your analytical plan to further investigate results that are not in line with your expectations or do not match up with theory. We will see an example of this in the case study video where additional analysis was necessary to disentangle a complex relationship.

Summary of key points

And that is a quick overview of how to implement your analytical plan. This involves preparing and checking your data, and then performing the analysis. Throughout this work, make sure to document your decisions. In the next video you'll be learning about interpreting and sharing your work.

(The Canada Wordmark appears.)

What did you think?

Please give us feedback so we can better provide content that suits our users' needs.

Video - Geoprocessing Tools (Part 1)

Catalogue number: 89200005

Issue number: 2020017

Release date: November 24, 2020

QGIS Demo 17

Geoprocessing Tools (Part 1) - Video transcript

(The Statistics Canada symbol and Canada wordmark appear on screen with the title: "Geoprocessing Tools (Part 1)")

So today we'll introduce geoprocessing tools, which enable layers to be spatially overlaid and integrated in a variety of ways. These tools epitomize the power of GIS and geospatial analysis, facilitating combining feature geometries and attributes, whether it be assessing spatial relations, distributions or proximities between layers and associated variables of interest. We'll demonstrate these tools with a simple case-study, examining land-cover conditions near water features, also known as riparian areas, in southern Manitoba. These tools can be reapplied and iterated with multiple layers, enabling you to combine, analyse and visualize spatial relations between any variables, geometries and layers of thematic relevance to your area of expertise.

So first, the Merged Census Division feature from the AOI layer was selected and subset to a new layer – CAOI – since Selected Features is not available when running tools as a batch process.

In addition to the interactive and attribute selection tools covered previously, there is one final type – Select by Location. This selects features from the input layer according to their spatial distribution relative to a second layer and the selected geometric predicates. The predicates define the particular spatial relations used when selecting features. We'll use Intersects, Overlaps and Are Within. Multiple predicates can be used, provided they do not conflict, and processing times increase with the number of selected predicates. At the bottom, the alternative selection options are available in the drop-down, but we'll run with the default.

So most selected features match the predicates but two spatially disconnected features were also returned due to a common attribute. So now we'll use the Multipart to Singlepart tool to break the multi-polygons into separate features, running with Selected Features Only.

Now we'll use a slight variation of Select by Location - Extract by Location. Instead of creating feature selections in our input layer, this will generate a new layer. So matching the predicates and comparison layer to those used in Select by Location, we'll click Run. In addition there is also Join by Location, which enables fields from the second layer to be joined to the first according to the predicates and the specified join type – as one-to-one or one-to-many. So these by Location tools enable features to be selected or extracted and field information joined between layers according to their relative spatial distributions.
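
For readers scripting the same workflow, here is a minimal sketch of the by Location tools from the QGIS Python console. The layer names are placeholders for whatever is loaded in your project, and the predicate indices (0 = intersects, 5 = overlaps, 6 = are within) should be confirmed against your QGIS version:

```python
# Run inside the QGIS Python console.
import processing
from qgis.core import QgsProject

project = QgsProject.instance()
watersheds = project.mapLayersByName("TWShed")[0]  # placeholder layer name
aoi = project.mapLayersByName("CAOI")[0]           # the subset AOI from the demo

# Select features that intersect (0), overlap (5) or are within (6) the AOI.
processing.run("native:selectbylocation", {
    "INPUT": watersheds,
    "PREDICATE": [0, 5, 6],
    "INTERSECT": aoi,
    "METHOD": 0,  # 0 = create a new selection
})

# Extract by Location writes the matching features to a new layer instead.
extracted = processing.run("native:extractbylocation", {
    "INPUT": watersheds,
    "PREDICATE": [0, 5, 6],
    "INTERSECT": aoi,
    "OUTPUT": "memory:extracted",
})["OUTPUT"]
```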

So now we'll merge the land-cover 2000 layers into one file with the Merge Vector Layers tool. Open the Multiple Selection box and select the four land-cover files. We'll also switch the Destination Coordinate Reference System to WGS84 UTM Zone 14 for spatial analysis. Click Run with a temporary file. So Merge can be applied to vectors of the same geometry type. It works best when layers contain the same fields and cover distinct yet adjacent areas – making the land-cover layers highly suitable. Two additional fields specifying the originating layer and file path for each of the features are included in the output.
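
Scripted from the QGIS Python console, the merge and reprojection steps might look like the sketch below; the four tile names are hypothetical stand-ins for the land-cover files:

```python
# Run inside the QGIS Python console.
import processing
from qgis.core import QgsProject

project = QgsProject.instance()
tiles = [project.mapLayersByName(name)[0]
         for name in ("lc2000_a", "lc2000_b", "lc2000_c", "lc2000_d")]

# Merge the tiles into one layer in WGS 84 / UTM zone 14N (EPSG:32614).
merged = processing.run("native:mergevectorlayers", {
    "LAYERS": tiles,
    "CRS": "EPSG:32614",
    "OUTPUT": "memory:merged_lc2000",
})["OUTPUT"]

# Reproject the watershed layer to the same CRS for consistent analysis.
ptwshed = processing.run("native:reprojectlayer", {
    "INPUT": project.mapLayersByName("TWShed")[0],  # placeholder source name
    "TARGET_CRS": "EPSG:32614",
    "OUTPUT": "memory:PTWShed",
})["OUTPUT"]
```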

While Merge is running, we'll reproject the watershed layer to the same Coordinate Reference System for consistency in our spatial analysis.

Now we'll join the provided classification guide with the class names to the merged output, using the Joins tab. So Code is the Join Field and COVTYPE the Target Field. We'll join the Class field and remove the prefix. Now we can run the merged layer through the Fix Geometries tool to accomplish two tasks simultaneously. First, it will fix invalid geometries – critical for adding spatial measures and applying geoprocessing tools – while also permanently joining the Class fields. The process may take a few minutes to complete.

 So now we'll rename the Reprojected and Fixed layers to PTWShed for projected tertiary watershed and FMLC2000 for fixed merged land-cover 2000. This will enable us to use the autofill settings to populate the file paths and names when running Clip as a Batch Process. So open Clip from the Toolbox and click Batch Process.

As we've covered, the Clip tool helps standardize the extent of analysis for multiple layers to an area of interest, or reduce processing times and file sizes in a workflow. The inputs can be of any geometry type while the Overlay Layer is always a polygon. Features and attributes that overlap with the Overlay Layer are retained, with the Overlay Layer acting like a cookie cutter on the input.

So select FMLC2000 and PTWShed as the inputs and select CAOI as the Overlay Layer. We can then copy and paste it into the next row – which we could repeat for as many entries as required. We'll click the plus icon and copy PTWShed for the Input to prepare this layer for an upcoming demo. Here we'll use Manitoba Outline as the Overlay layer. For the output files, we'll store them in a Scratch folder for intermediary outputs in our workflow, which can then be deleted at the end of part 2 of the demo. Enter C for the filename, click Save, and then use Fill with Parameter Values in the Autofill settings drop-down. This adds a C prefix to our existing layer names. We'll store the last file in the Geoprocessing folder so that it is retained. Click Run and we'll pick back up once completed. The process takes around five minutes to complete.
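
The batch run could also be scripted as a simple loop, as in this sketch; the output folder and file format are illustrative only:

```python
# Run inside the QGIS Python console.
import processing
from qgis.core import QgsProject

project = QgsProject.instance()
overlay = project.mapLayersByName("CAOI")[0]

# Clip each prepared layer to the area of interest, adding the C prefix
# the Autofill settings produce in the batch dialog.
for name in ("FMLC2000", "PTWShed"):
    processing.run("native:clip", {
        "INPUT": project.mapLayersByName(name)[0],
        "OVERLAY": overlay,
        "OUTPUT": "C:/GIS/Scratch/C{}.gpkg".format(name),  # illustrative path
    })
```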

So with the clipped layers complete, load them into the Layers Panel. I'll move them back into the Processing Group for organization purposes and then zoom in on the layers.

We can load the provided symbology file to visualize the different land-cover classes.

Then we'll add an area field to the clipped land-cover file. Call it FAreaHA for field area, using a decimal field type with a length of 12 and a precision of 2. We'll reuse these parameters for adding subsequent numeric fields. Enter the appropriate expression - $area divided by 10000.
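
As a sketch of this step with the field calculator algorithm (native:fieldcalculator in recent QGIS releases); the FIELD_TYPE enum value for a decimal field is worth confirming in your version:

```python
# Run inside the QGIS Python console.
import processing
from qgis.core import QgsProject

clipped = QgsProject.instance().mapLayersByName("CFMLC2000")[0]  # clipped land-cover

# Add a decimal area field in hectares; $area returns square metres
# in a projected CRS such as UTM zone 14.
with_area = processing.run("native:fieldcalculator", {
    "INPUT": clipped,
    "FIELD_NAME": "FAreaHA",
    "FIELD_TYPE": 0,       # 0 = decimal (float)
    "FIELD_LENGTH": 12,
    "FIELD_PRECISION": 2,
    "FORMULA": "$area / 10000",
    "OUTPUT": "memory:lc_with_area",
})["OUTPUT"]

# The later IAreaHA and PrcLCinRip fields reuse the same tool, e.g. with
# FORMULA '"IAreaHA" / "FAreaHA" * 100' for the percentage field.
```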

Now we'll use Select by Expression to isolate 'Water' features using "COVTYPE" = 20 or "Class" LIKE 'Water' – and then click Select Features.

Now we'll generate a Buffer around the selected features to begin creating the Riparian area layer. There are many Buffer tools available in the Processing Toolbox – which we'll demonstrate in Part II – here using the default tool.

We'll check the 'Selected features only' box and enter 30 for the distance – a common riparian setback in land-use planning and policies. Change the End Cap Style to Flat and check Dissolve Results, so that any overlapping buffers are merged to avoid conflating total area estimates. Run with a temporary output file. We'll rerun the tool, toggling back to the Parameters and changing the distance to 0, to output water features as their own temporary layer – reducing processing times for the next tool.
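
Here is a sketch of the selection and buffer steps together from the Python console; QgsProcessingFeatureSourceDefinition is what carries the 'Selected features only' behaviour into a scripted run, and the enum values shown are assumptions to verify in your QGIS version:

```python
# Run inside the QGIS Python console.
import processing
from qgis.core import QgsProject, QgsProcessingFeatureSourceDefinition

lc = QgsProject.instance().mapLayersByName("CFMLC2000")[0]

# Isolate water features with the same expression used in the demo.
lc.selectByExpression("\"COVTYPE\" = 20 OR \"Class\" LIKE 'Water'")

# Buffer only the selected features by 30 m, flat end caps, dissolved output.
buffered = processing.run("native:buffer", {
    "INPUT": QgsProcessingFeatureSourceDefinition(lc.id(), selectedFeaturesOnly=True),
    "DISTANCE": 30,
    "SEGMENTS": 5,
    "END_CAP_STYLE": 1,  # 1 = flat
    "JOIN_STYLE": 0,     # 0 = round
    "MITER_LIMIT": 2,
    "DISSOLVE": True,
    "OUTPUT": "memory:B30W",
})["OUTPUT"]
```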

Buffer tools can be applied to any vector geometry type. And they are used to assess the proximity of features to those in other layers. We can also use buffers to facilitate combining our geometries and attributes with other layers – like buffering lines or points to use them as a difference layer. The buffer contains the input layer's attributes, which can be used for further analysis. The outputs are often applied with other geoprocessing tools for further examination.

So we'll rename the outputs, naming the first B30W and the second LC2000Water, to facilitate their distinction.

Zooming in on the buffer, the input water features were also included in the output geometry. Since we are not interested in the water features themselves but in the land-cover conditions around them, we'll run the water buffer through the Difference tool using LC2000Water as the Overlay Layer to retain only the buffered area. So Difference is the opposite of Clip – retaining only input features that do not overlap with the Overlay layer. Like Clip, the input can be any geometry type, while the overlay layer is always a polygon. Difference can be used whenever we are interested in features that do not overlap with a specific polygon, such as areas outside a certain drive time or distance from hospitals, or farm fields, roads or grain elevators not impacted by historical flooding. So click Run and we'll continue once the output is complete.
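
A minimal sketch of the same Difference call, assuming the renamed layers from earlier in the demo are loaded in the project:

```python
# Run inside the QGIS Python console.
import processing
from qgis.core import QgsProject

project = QgsProject.instance()

# Keep only the parts of the 30 m buffer that fall outside the water polygons.
riparian = processing.run("native:difference", {
    "INPUT": project.mapLayersByName("B30W")[0],
    "OVERLAY": project.mapLayersByName("LC2000Water")[0],
    "OUTPUT": "memory:riparian_ring",
})["OUTPUT"]
```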

Toggling the water layer off, we can see that the Difference has retained only our 30 metre buffer. So now we've successfully generated our riparian area layer but need to follow up with the Intersection tool – running it twice to extract watershed codes and land-cover classes to our layer. Intersection retains the overlapping feature geometries of the input layers and any selected attributes of interest in the Fields to Keep parameter. If geometry types differ between layers, the first layer's geometry is used in the output. Thus, Intersection can help combine variables of interest from multiple layers.

For the first run we'll use the Difference and clipped watershed layers as the inputs to assign watershed codes to the riparian buffer. This will enable us to examine land-cover conditions by watershed in Part II of the demo. And for PTWShed check the sub-basin code field in the Multiple Selection box. For the Difference layer, we'll select an arbitrary field for the Fields to Keep parameter – here selecting the "layer" field, clicking OK and then clicking Run. This process takes around 5 minutes and we'll continue when complete.

Within the Attribute Table we can see watershed codes have been successfully assigned to the riparian layer. Now we'll run the tool again, using the intersect as the Input and the clipped land-cover file as the Overlay layer to integrate the land-cover features in the riparian areas. We'll retain the watershed code field from the first layer and the "Class" and "FAreaHA" fields from the land-cover. We'll save it to file, storing it in the main geoprocessing folder and calling it RipLC2000 for riparian land-cover 2000. If the tool fails, use Fix Geometries tool and rerun the Intersection with the fixed outputs. We'll pick back up after the layer is created, which may take up to 20 minutes.
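
The two Intersection runs could be sketched as follows; SUBBASIN stands in for the actual sub-basin code field name, and the output path is illustrative:

```python
# Run inside the QGIS Python console.
import processing
from qgis.core import QgsProject

project = QgsProject.instance()
riparian = project.mapLayersByName("riparian_ring")[0]  # Difference output

# First run: attach sub-basin codes from the clipped watersheds to the buffer.
step1 = processing.run("native:intersection", {
    "INPUT": riparian,
    "OVERLAY": project.mapLayersByName("CPTWShed")[0],
    "INPUT_FIELDS": ["layer"],       # arbitrary field kept from the buffer
    "OVERLAY_FIELDS": ["SUBBASIN"],  # placeholder sub-basin code field
    "OUTPUT": "memory:rip_by_basin",
})["OUTPUT"]

# Second run: overlay the land-cover features inside the riparian areas.
processing.run("native:intersection", {
    "INPUT": step1,
    "OVERLAY": project.mapLayersByName("CFMLC2000")[0],
    "INPUT_FIELDS": ["SUBBASIN"],
    "OVERLAY_FIELDS": ["Class", "FAreaHA"],
    "OUTPUT": "C:/GIS/Geoprocessing/RipLC2000.gpkg",  # illustrative path
})
```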

With the riparian land-cover layer loaded copy and paste the style from the clipped land-cover to visualize the different feature classes occupying these areas. Now we've successfully combined the riparian buffer by watershed with the land-cover layer. And for the final component of Part I we'll add four new fields with the Field Calculator, specifically the intersected area in hectares, to determine the area of each land-cover feature within the buffered riparian area. Use the same parameters and expression as applied for creating the FAreaHA field.

So next we'll calculate the percentage of each feature within the 30 metre buffer, to assess the relative distribution of the original features within the riparian setback and isolate any potential violating land-uses. We'll call the field PrcLCinRip, for percent land-cover in riparian area, with the same parameters as the previous fields. Expanding the fields drop-down, we'll divide IAreaHA by FAreaHA and multiply by 100.

The next two fields are to create an identifier which combines the subwatershed codes and land-cover class fields which we'll use to aggregate and assess riparian land-cover by watershed. First is an FID field or FeatureID, which we'll use for the Group_By parameter when using the concatenate function. Leave the parameters in their defaults and double-click the @row_number expression.

Now we can use Concatenate to combine our fields in creating the ID. This is extremely helpful for further processing and analysis, such as distinguishing and rejoining different processed layers to original features or aggregating datasets by different criteria. So we'll change to a text field type with a length of 100 and call it "UBasinLCID".

So type concatenate in the expression box – specifying the function to apply, and then open bracket and double-click SUBBASIN in the fields and values drop-down. Using the separators and adding a dash in single quotes will help separate the codes and class fields for interpretability. As noted, the FID field is used for the Group_By parameter, writing group underscore by, colon, equal sign and then double-clicking the FID field.

We can see the combined fields in the output preview. Given the number of features, the concatenate function can take up to 30 minutes to run. After it's complete, make sure to save the edits to the layer and the project file with a distinctive name for use in Part II of the demo.
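
A sketch of these last two fields from the Python console; the concatenate expression below is one plausible reading of the steps described above, not a verbatim copy from the demo:

```python
# Run inside the QGIS Python console.
import processing
from qgis.core import QgsProject

riplc = QgsProject.instance().mapLayersByName("RipLC2000")[0]

# Add the row-number FID used as the group_by key.
with_fid = processing.run("native:fieldcalculator", {
    "INPUT": riplc,
    "FIELD_NAME": "FID",
    "FIELD_TYPE": 1,   # 1 = integer
    "FIELD_LENGTH": 10,
    "FIELD_PRECISION": 0,
    "FORMULA": "@row_number",
    "OUTPUT": "memory:with_fid",
})["OUTPUT"]

# Build the combined ID: sub-basin code and land-cover class joined by a
# dash, grouped per feature by FID so the aggregate applies row by row.
processing.run("native:fieldcalculator", {
    "INPUT": with_fid,
    "FIELD_NAME": "UBasinLCID",
    "FIELD_TYPE": 2,   # 2 = string
    "FIELD_LENGTH": 100,
    "FIELD_PRECISION": 0,
    "FORMULA": "concatenate(\"SUBBASIN\" || '-' || \"Class\", group_by:=\"FID\")",
    "OUTPUT": "memory:riplc_ids",
})
```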

(The words: "For comments or questions about this video, GIS tools or other Statistics Canada products or services, please contact us: statcan.sisagrequestssrsrequetesag.statcan@canada.ca" appear on screen.)

(Canada wordmark appears.)

Analysis 101, part 1: Making an analytical plan

Catalogue number: 892000062020009

Release date: September 23, 2020

By the end of this video, you will learn about the basic concepts of the analytical process:

  • the guiding principles of analysis,
  • the steps of the analytical process and
  • planning your analysis.
Data journey step
Analyze, model
Data competency
Data analysis
Audience
Basic
Suggested prerequisites
N/A
Length
8:13
Cost
Free

Watch the video

Analysis 101, part 1: Making an analytical plan - Transcript

(The Statistics Canada symbol and Canada wordmark appear on screen with the title: "Analysis 101, part 1: Making an analytical plan")

Analysis 101: Part 1 - Making an analytical plan

Hi, welcome to analysis 101, video 1: making an analytical plan.

Learning goals

By the end of this video you will learn about the basic concepts of the analytical process: the guiding principles for analysis, the steps in the analytical process and planning your analysis. This video is intended for learners who want to acquire a basic understanding of analysis. No previous knowledge is required.

Analysis at your organization

Take a second to think about analysis at your organization. What role does analysis play? Are you and your colleagues producing briefing notes for senior leadership? Are you writing reports for clients or for your website? Are you doing more technical or descriptive work? Does your organization have guiding principles that you should be aware of? You'll be taking these into consideration when you plan your analysis.

Steps in the analytical process

(Diagram of 6 images representing the steps involved in the analyze phase of the data journey where the first steps represent the making of an analytical plan, the middle steps represent the implementation of said plan and the final steps are the sharing of your findings.)

On this slide you can see that there are six main steps in the analytical process, and each is related to making a plan, implementing that plan or sharing your findings. We will explain the main activities that you will need to undertake within each step. If you've been watching Statistics Canada's data literacy videos, you'll recognize that this work is part of the third step: the analyze phase of the data journey. This diagram is the backbone of our analytical process. We will come back to it in each of the videos in this series.

Step 1: What do we already know?

For this video on planning your analysis, we'll start by understanding the context and investigating what we already know about a topic. Start by ensuring you fully understand the broader topic and the context surrounding it, and think through the following questions. What do we already know about the topic? Has one of your colleagues already done a similar exercise? Start by reviewing any previous work done on the topic. Once you've read up on the topic, you can identify the knowledge gaps. What is missing in the previous work? This will help you see how your project adds value.

Example

To make sure you understand these steps, let's go through an example together. This is from a study on the economic well-being of millennials. This study was motivated by a lack of information on financial outcomes for Canadian millennials.

Millennials-Context

When we began work for this study, we knew that millennials were often stereotyped by the media as still living in their parents' basements, spending too much on takeout food, and so on. We also knew that a study by the United States Federal Reserve Board had shown that American millennials had lower incomes and fewer assets than previous generations had at the same age. What were the knowledge gaps? Despite anecdotal media reports on millennials, we knew that there wasn't a detailed study assessing the economic well-being of Canadian millennials.

Millennials-Relevance

Why is our analysis relevant? Well, housing affordability and high debt levels have been identified as concerns for younger generations earlier in their lives. From this we knew the topic was relevant for policy makers, journalists and Canadians. We'll return to this example later.

Step 2: Define your analytical question

Back to our analytical process. The next step is to define your analytical question.

What is your analytical question?

How do you define your analytical question? Very clearly state the question you are trying to answer, and use plain language. Simply put, this means using vocabulary that an eighth grader could understand. You might have one main analytical question followed by some supporting questions. Why is your question relevant? Why should we care about your work? Define the value that your analysis adds, either to your organization, your client, or to our understanding of the topic.

Plan your analysis

Now that you have a relevant question, how will you answer it? This is the perfect time for you to put together an analytical plan which provides a road map for answering your analytical question. You will need to think about the context of your topic and how you will answer your question. What data and methodology are needed to answer your question? You will also need to think about how you will communicate your results, whether through a briefing note, analytical paper, infographic or presentation.

Identify your resources

Now that you have your analytical plan, think about your resources. Feedback is an essential element of your analytical journey and you should leverage input from colleagues at every step. Typically we will put together a short plan for colleagues and management to review. Maybe some of your colleagues have expertise on the topic you are working on. Colleagues might also have expertise in the data you are using or your methodology. Our colleagues are often in the best position to provide tips and feedback and to help us work through problems.

Millennials-Analytical question

For our example about the economic well-being of Canadian millennials, our main analytical question was: Are millennials better or worse off than previous generations at the same age in terms of income levels, debts, assets, and net worth? Given the level of interest in millennials and debt levels, we wrote a short analytical paper that answered this question.

Your analytical journey

Remember it this way: analysis is like taking a canoe trip. You need a good plan. You should map out where you are going and how you will get there. That's your analytical plan. You will also need a strong analytical question, solid data, and good methodology. That's your canoe.

Remember: Define your analytical question

The key takeaway from this video is to remember to develop a clearly defined analytical question. Even with a great topic and high-quality data, you cannot produce good results without a well-defined question.

Summary of key points

To summarize, the analytical process can be viewed as a series of steps designed to answer a well-defined question. Once the topic has been defined, the next step is to create an analytical plan. And always incorporate the feedback you receive during the planning stage of your analytical project. Before the next video, take a few minutes to identify two analytical questions and think through why these questions are relevant for your organization. Stay tuned: up next, we'll share tips on how to implement your analytical plan.

(The Canada Wordmark appears.)

What did you think?

Please give us feedback so we can better provide content that suits our users' needs.

Video - Semi-Automated Mapping in QGIS with the Atlas Panel

Catalogue number: 89200005

Issue number: 2020016

Release date: November 24, 2020

QGIS Demo 16

Semi-Automated Mapping in QGIS with the Atlas Panel - Video transcript

(The Statistics Canada symbol and Canada wordmark appear on screen with the title: "Semi-Automated Mapping in QGIS with the Atlas Panel")

So following up from Creating Maps in QGIS, today we'll discuss using the Atlas Panel in the Print Layout to rapidly generate multiple maps. The Atlas panel uses a specified 'Coverage Layer' to define the geographies of the outputs. Today we'll use it to map population dynamics in Census Metropolitan Areas - or CMAs for short - across Canada - effectively semi-automating the map production process.

So once again the preparation steps are provided in the video description – but to summarize quickly, we used the one-to-one join procedures to link the population table to the Census Tract layer. The Refactor Fields tool was then applied to save to a permanent file with correctly attributed field types. We also dissolved the Census Tract layer using the Census Metropolitan Area Name (CMANAME) field to create the Coverage Layer for our Atlas and, with the fill colour set to fully transparent, used it to outline the Census Tracts within our main map group.

The cartographic Census Subdivision layer was then run through the Fix Geometries tool, dissolved by the Provincial Unique Identifier and run through the Multipart to Single Part tool, ensuring that all features were separate entries within the attribute table. Area fields were added to it and to the Lakes and Rivers polygon using the Field Calculator, and the Select by Expression tool was used to subset features with areas greater than 2,500 and 500 square kilometres respectively; these were then grouped together to comprise our Inset Map.
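For readers who prefer to script these preparation steps, they can be approximated in the QGIS Python console. This is a minimal sketch, not the exact workflow from the demo: the layer name and the PRUID field are placeholders for your own data, while the algorithm IDs are the standard native processing tools.

    import processing
    from qgis.core import QgsProject

    # Placeholder layer name for the cartographic Census Subdivision layer
    csd_layer = QgsProject.instance().mapLayersByName('CSD')[0]

    # Repair invalid geometries before further processing
    fixed = processing.run("native:fixgeometries",
        {'INPUT': csd_layer, 'OUTPUT': 'memory:'})['OUTPUT']

    # Dissolve by the provincial unique identifier (placeholder field name)
    dissolved = processing.run("native:dissolve",
        {'INPUT': fixed, 'FIELD': ['PRUID'], 'OUTPUT': 'memory:'})['OUTPUT']

    # Split multipart features into separate attribute table entries
    single = processing.run("native:multiparttosingleparts",
        {'INPUT': dissolved, 'OUTPUT': 'memory:'})['OUTPUT']

    # Subset large features, e.g. areas greater than 2,500 square kilometres
    # ($area returns square metres when the layer units are metres)
    single.selectByExpression('$area / 1000000 > 2500')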

The Labels applied to our coverage and province layers had similar settings. So to quickly summarize: in the Formatting subtab for the Coverage Layer we specified to wrap text on the dash character. A text buffer was also applied – with the coverage layer set to 75% opacity. The Placement was set to Horizontal (slow), ensuring the legibility of the text-based labels. And in the Rendering tab, 'Only draw labels which fit completely within the feature' was checked, and 'Discourage labels from covering the feature's boundaries' was selected with the highest weight applied. So now we can toggle off the Prep Layers group.

Now in the Print Layout I used the add shapes tool, specifically Add Rectangles, to divide the layout for the map items, which were then locked within the Items panel. The alignment tool on the Actions toolbar was then used to ensure that added items were placed above the rectangles. I've also already added many of the mandatory map items – including the additional information, scale bar, legend and title. The title uses an expression, including a generic text prefix within single quotes followed by the vertical separators and then Census Metropolitan Area name field to label by metropolitan area, which will update automatically once our Atlas is generated.
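As a rough illustration of such a title expression: in a layout label, text inside [% %] is evaluated as a QGIS expression, and || concatenates the quoted prefix with the field value. This is a sketch only; the layout name and the prefix text are placeholders, not the exact strings used in the demo.

    from qgis.core import QgsProject, QgsLayoutItemLabel

    # Placeholder layout name; layoutByName returns an existing print layout
    layout = QgsProject.instance().layoutManager().layoutByName('JCTPop Atlas')

    title = QgsLayoutItemLabel(layout)
    # '||' joins the generic prefix to the CMANAME field;
    # the label re-evaluates automatically for every Atlas page
    title.setText('[% \'Population Change: \' || "CMANAME" %]')
    layout.addLayoutItem(title)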

Just a quick tip for map item formatting: if greater control is needed, the Print Screen or Snipping tools could be used to export, externally format and re-add items, such as the legend, diagrams or table, as a picture.

On that note our North Arrow is still missing. So rather than using the add Arrow and Text label function, let's add it from Picture this time. Clicking and dragging across the desired location in the Print Layout, we can then go to the Item Properties Panel and expand the Search Directories. And once loaded select the desired icon. I'll alter the Fill Colour to be darker, to ensure its visibility against the main map, and then clicking Back, we'll also switch the placement to Middle.

So now we can add both maps simultaneously – placing the main map in the larger box of the layout and the inset map in the smaller box on the right. The main map is currently being rendered over our north arrow, so once again we'll select it and use the Lower in the Alignment tools to ensure the north arrow is visible.

While they're rendering I'd also like to highlight that the scale-bar for the main map is set to Fit Segment Width as opposed to the Fixed Width used in the previous demo, which will update the scale-bar according to the size of the census metropolitan area being mapped.

Now for our second map, let's add a Grid, expanding the drop-down and clicking the Plus icon – and then select Modify Grid Properties. We'll also change the CRS to WGS 84, entering 4326 in the system selector, so that we can show coordinates in decimal degrees. We'll specify an interval of 2 degrees, which at the moment adds many lines, but will be more appropriate once we generate the Atlas. Check the Draw Coordinates button. The format of the coordinates can then be selected in the drop-down, here we'll leave it with the default. We'll specify to show latitude only for the Right and Left parameters and longitude only for the Bottom and Top. At the bottom of the panel we'll change the precision for the Grid units to 1.
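Scripted, the grid setup might look like the following sketch in the QGIS Python console, assuming map2 is the inset map item (as in the other sketches, variable names are placeholders); the classes and methods are from the standard QgsLayoutItemMapGrid API.

    from qgis.core import QgsLayoutItemMapGrid, QgsCoordinateReferenceSystem

    grid = QgsLayoutItemMapGrid('Inset grid', map2)
    grid.setEnabled(True)
    grid.setCrs(QgsCoordinateReferenceSystem('EPSG:4326'))  # WGS 84, decimal degrees
    grid.setIntervalX(2)   # 2-degree spacing, as in the demo
    grid.setIntervalY(2)
    grid.setAnnotationEnabled(True)   # equivalent to checking Draw Coordinates
    grid.setAnnotationPrecision(1)    # precision of 1 for the grid units
    map2.grids().addGrid(grid)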

Back in the Item Properties Panel for the Inset Map, we'll also add an Overview – clicking the plus icon and specifying the map being overviewed in the drop-down, which is Map 1.

Now we can generate the Atlas. So in the Atlas drop-down on the Menu-bar select Atlas Settings. And within the Atlas panel check Generate Atlas and specify the Coverage Layer – in this case DCMAAtlas, or the dissolved Census Metropolitan Areas. We'll specify the field used for the Page Name, in this case the CMANAME field, and use the same field for Sort by. This will sort them alphabetically when we preview our Atlas. Uncheck the Single Export option, as we want each metropolitan area to be a separate map, and select the desired output file format. Additionally we'll change the output filename to something more intuitive than just the feature IDs. So we'll switch 'Output' to JCTPop and then click on the Expression box. In the Variables drop-down, we will replace featureID with @atlas_pagename, which we set to the CMANAME field, so the outputs will be named according to the census metropolitan area. This did involve a trade-off – requiring the entries within the CMANAME field to be reformatted, removing periods, slashes, question marks and other symbols that caused erroneous filenames, which would lead to the atlas output failing. So having replaced these characters I can just click OK.
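The same Atlas settings can also be applied programmatically. A minimal sketch, assuming layout is the print layout from the earlier sketch and coverage is the dissolved CMA layer already loaded in the project; the filename expression below is one plausible form of the prefix-plus-page-name pattern described above.

    coverage = QgsProject.instance().mapLayersByName('DCMAAtlas')[0]

    atlas = layout.atlas()
    atlas.setEnabled(True)                     # equivalent to checking Generate Atlas
    atlas.setCoverageLayer(coverage)           # the dissolved CMA layer
    atlas.setPageNameExpression('"CMANAME"')   # page names from the CMANAME field
    atlas.setSortFeatures(True)
    atlas.setSortExpression('"CMANAME"')       # alphabetical ordering for the preview
    # Name outputs by metropolitan area instead of feature ID
    atlas.setFilenameExpression("'JCTPop_' || @atlas_pagename")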

Now we can Select Map 1 and within the Item properties Panel check the Controlled by Atlas box. For the main map we'll specify 5% for the Margins Around the Feature. In the main interface we can now toggle off the inset map group. And back in the Print Layout select the main map and in the Item Properties Panel check Lock Layers and Lock Styles.

Repeating with Map 2, we'll check controlled by Atlas once again and enter a larger margin of 750% to ensure the broader geographic location is shown. Then in the main interface toggle off the main map group and back in the layout select Lock Layers. And this is so that the inset map does not show the layers of the main map and vice versa.
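And the equivalent of the Controlled by Atlas settings for the two map items, again as a sketch with map1 and map2 standing in for the main and inset maps:

    map1.setAtlasDriven(True)    # Controlled by Atlas
    map1.setAtlasMargin(0.05)    # 5% margin around each coverage feature

    map2.setAtlasDriven(True)
    map2.setAtlasMargin(7.5)     # 750% margin to show the broader location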

Now on the Atlas toolbar we can select the Preview Atlas icon. So we can toggle through the different CMAs alphabetically, or select ones of interest from the drop-down. The next metropolitan area is Barrie. So as you can see, the title, grid and scale-bar update rapidly, while the maps, particularly the Inset Map, take longer. This is likely due to the detail of the cartographic boundary file, combined with the broader extent being mapped within the inset.

I ran the Atlas output earlier – as we can see scrolling through the maps, most are appropriately formatted and ready for use as supporting figures or stand-alone documents. Relatively few maps require manual editing and individual export to maintain intuitive values for map items such as the scale or grid intervals. For example, for Edmonton we would want to use a larger interval for the Grid coordinates, such as 5°. And similarly, for Granby, we would want to alter the scale to Fixed Width and enter 10 for a more intuitive break value. Then we could use the export procedures from the making maps demo to individually export these specific maps. On the whole, the Atlas Panel has facilitated rapidly mapping multiple locations and variations in attributes of interest with relatively little input or effort.

Toggling back we could now select another CMA of interest, such as Drummondville – or one of the outputs from the Atlas that needed edits such as Edmonton. Then we could select the Inset map, re-expand the Grids drop-down and click the Modify button – updating the Grid Interval for X to 5 degrees. We could then resize the North Arrow to ensure it's not obscuring the main map features, and then individually export this map using the procedures covered in the previous demo.

So with the Export Atlas tool we can specify the file format to use. The same formats from the single export options are available. It is a good idea to create a separate directory for the output maps. Then specify the output resolution and click Save to run. We won't actually run the output as it's a time-intensive process, taking around 35 minutes.
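For reference, the whole Atlas can also be exported from the Python console using the static QgsLayoutExporter overload that accepts an atlas iterator. A sketch, with the output path as a placeholder:

    from qgis.core import QgsLayoutExporter

    settings = QgsLayoutExporter.ImageExportSettings()
    settings.dpi = 300    # output resolution

    # Writes one image per coverage feature, named via the filename expression
    result, error = QgsLayoutExporter.exportToImage(
        layout.atlas(), '/path/to/atlas_maps/JCTPop', 'png', settings)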

The last thing that we would want to do is save the print layout as a template for further use, such as reapplying it for map production in the next census collection period or to generate maps with a different variable of interest at the Census Tract level.

So use the Atlas panel with a coverage layer to rapidly and easily generate multiple maps for particular areas of interest. Save the template for re-use in examining other variables of interest or applying in another time-period. Apply these skills to your own areas of expertise and datasets of interest for semi-automated map production.

(The words: "For comments or questions about this video, GIS tools or other Statistics Canada products or services, please contact us: statcan.sisagrequestssrsrequetesag.statcan@canada.ca" appear on screen.)

(Canada wordmark appears.)

Video - Making Maps in QGIS with the Print Layout (Part 2)

Catalogue number: 89200005

Issue number: 2020015

Release date: November 23, 2020

QGIS Demo 15

Making Maps in QGIS with the Print Layout (Part 2) - Video transcript

(The Statistics Canada symbol and Canada wordmark appear on screen with the title: "Making Maps in QGIS with the Print Layout (Part 2)")

So using the Layout Manager, we can reopen our Layout from part one. And now we'll cover adding some additional optional map items. When used judiciously, these items can help enhance the interpretability and aesthetic of a map. One of the final procedures we covered in Part I was locking the Layer and Style of Layers for our Main Map, meaning that changes to the main interface will not impact its appearance in the Print Layout.

So the first item that we'll add is the Inset Map. Back in the main interface we'll toggle the main map group off, toggle the inset on and zoom to the provincially aggregated layer. Now back within the Layout we can add the Inset map, with the Add Map to Layout tool – and left-clicking and dragging across for its placement within the Layout.

We'll then add another scale-bar item - for Map 2 this time. Select Numeric from the Format drop-down. And we'll place it below the Inset map, altering the placement parameter in the display drop-down to Center and adjusting the placement within the Layout. To ensure an intuitive and interpretable scale ratio once again we'll enter a fixed scale value for the Inset Map using the Data Defined Override drop-down, in this case entering 55 million in the Expression window.

Now let's add a picture. So with the tool engaged click and drag where it should be placed within the Layout. Now we can load the image from our Directory by clicking the triple dot icon. It can then be resized and placed within the Layout as needed.

For the final optional item, let's add part of an attribute table to the layout. Click the Attribute Table icon and drag in the layout for its positioning. We can then specify which layer to use, selecting our subset layer – JMBCDPop – in the Layers drop-down. Clicking the Attributes box, we can then specify which fields should be included or removed from the table. So we'll remove the Census Division Unique Identifier, Census Division Type, the provincial fields, as well as the Total Private Dwelling counts, percent change and area fields. We can then rename the remaining fields in the Heading column, which can be of any length: Name, Population 2011 and 2016 written in full, Percent Change, Density, Rank (CA) for the national level and Rank (MB) for the provincial level. We can then specify the fields to sort the table by. Here we'll use the CDName field. We could also add additional sorting rules, much like in Excel, here using the provincial population rank.
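Scripted, adding such a table involves a multiframe plus a frame to hold it in the layout. A minimal sketch, with the layer variable, the frame position and the field names all placeholders:

    from qgis.core import QgsLayoutItemAttributeTable, QgsLayoutFrame
    from qgis.PyQt.QtCore import QRectF

    table = QgsLayoutItemAttributeTable(layout)
    table.setVectorLayer(jmbcdpop_layer)                        # the subset layer
    table.setDisplayedFields(['CDNAME', 'Pop2011', 'Pop2016'])  # placeholder fields
    layout.addMultiFrame(table)

    frame = QgsLayoutFrame(layout, table)
    frame.attemptSetSceneRect(QRectF(15, 200, 180, 60))  # x, y, width, height in mm
    table.addFrame(frame)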

So now that the table is added we can nudge it down, and as we resize it within the layout the number of features in the table changes. We could also control this using the parameters in the Feature Filtering drop-down.

In the Appearance section, select Advanced Customization. Check even rows and we'll alter the colour formatting to be a light grey to distinguish the individual rows in the table. In the Show Grid drop-down uncheck the Draw Horizontal Lines box.

Now we'll add a Node Item, using the Add Polyline function, to the Print Layout. We'll use the lines to form the horizontal border lines for the attribute table and header row. So left click twice for the beginning and end of the line, and right-click to complete. We'll then edit the length of the line to ensure that it perfectly matches the width of the attribute table. Then we can copy and paste the first line and place it in the other two locations. Once this is done we can select the items by clicking and dragging over the Layout, and once again use the Group tool on the Actions toolbar and lock their position in the Items Panel.

With all map items formatted, we can now export the map. So the Map can be exported as an image or as a .pdf. The image file format enables it to be rapidly added within a document as a figure or supporting information, while the .pdf can be used to share the map with others in a widely accessible but protected file format. Here we'll export the map as an image. Navigate to the desired directory and provide an output filename. Then we can enter the desired resolution. In general, 300 dots per inch will suffice for most applications. But say we want to include the map on a poster, then we could use a finer resolution of 600 or even 1200 dots per inch as required. Then click Save, and the export procedure takes about a minute to complete.
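The export step is also scriptable. A minimal sketch from the QGIS Python console, with the output path as a placeholder:

    from qgis.core import QgsLayoutExporter

    exporter = QgsLayoutExporter(layout)
    settings = QgsLayoutExporter.ImageExportSettings()
    settings.dpi = 300    # 300 dots per inch suffices for most applications

    exporter.exportToImage('/path/to/mb_divisions_map.png', settings)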

When it is completed, click the Hyperlinked Filename at the top of the Layout. And then we can open and examine the output map. If we need to make any adjustments we could easily return to the layout, incorporate them and repeat the export procedure.

So in this demo we explored the principles, procedures and tools for creating a map in the Print Layout. Specifically, users should now have the knowledge and skills to: distinguish mandatory and optional map items; use available tools in QGIS's main interface and Print Layout to prepare map data; and add map items to the Print Layout and alter their properties using available panels, such as using the Lock Layers and Styles functions in the Item Properties panel to add inset maps, and using the group and lock functions in the Items panel to fix item positions in the Layout.

Finally you should also feel comfortable saving and exporting finalized maps. So apply these skills to your own areas of expertise to create well-balanced, easy-to-interpret maps.

(The words: "For comments or questions about this video, GIS tools or other Statistics Canada products or services, please contact us: statcan.sisagrequestssrsrequetesag.statcan@canada.ca" appear on screen.)

(Canada wordmark appears.)

Video - Making Maps in QGIS with the Print Layout (Part 1)

Catalogue number: 89200005

Issue number: 2020014

Release date: November 23, 2020

QGIS Demo 14

Making Maps in QGIS with the Print Layout (Part 1) - Video transcript

(The Statistics Canada symbol and Canada wordmark appear on screen with the title: "Making Maps in QGIS with the Print Layout (Part 1)")

Hello everyone. Today we'll learn how to create maps in QGIS using the Print Layout – which is a separate window from the main interface used for mapping. Specifically we'll cover navigating the window, and using its tools and panels; adding map items, and distinguishing those that are mandatory versus optional; and saving and exporting a map.

Cartography, or map-making, blends the science and art of GIS. Maps are powerful tools for conveying information to a wide audience. The creator chooses which features are included, how they are visualized and how to best convey the information. Maps should be intuitive and readily interpretable. Important factors to consider are similar to those introduced in the vector visualization tutorials, including:

What is the main theme or message of the map and who is the target audience?

This helps define essential layers for the map. And generally you should exclude peripheral layers that may overcrowd your map or message.

Second, is the visualization logical, and does it facilitate the distinction of layers or features?

And similarly, is the level of detail or generalization within the layers suited to the map scale?

Finally, have you selected an appropriate projection for the location of your map?

With the joined division and aggregated provincial layers from the one-to-one join by attributes demo loaded in the Layers Panel, division features in Manitoba were selected and subset to a new layer – JMBCDPop - which will be our main map. Selected features were also run through the Dissolve tool using the province name field to create the MB outline layer, which was grouped with the province layer to create the Inset Map. So inset maps just show the broader geographic location and context of a main map. The groups in the Layers Panel will help us add our two maps separately within the Print Layout, important since the Layout is actively tied to the Canvas in the main interface.

So with the map layers created and grouped, we now need to establish our visualizations. Instead of using the Layer Properties Box, today we'll use the Layer Styling panel, right-clicking on an empty toolbar area and selecting it from the drop-down. The panel contains the main visualization tabs from the Layer Property Box, and layers can also be selected from the drop-down at the top – enabling the rapid visualization of multiple layers.

So we'll apply a graduated symbology to the Pop Percent Change field, using the Spectral Colour ramp, Pretty Breaks as the Mode and 8 classes. We'll also change the precision to 1. Then in the Labels tab, we'll once again specify the Percent Population Change field to use for labelling. And in the Formatting subtab check Formatted Numbers and change the decimal places to 1. Since the visualized field includes negative values we could also check 'Show plus sign' if desired, but here we'll leave it unchecked.
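As a rough scripting equivalent of this graduated symbology, under the assumption that layer is the subset layer and Pop_pct_change is a placeholder for the percent change field name:

    from qgis.core import (QgsGraduatedSymbolRenderer,
                           QgsClassificationPrettyBreaks, QgsStyle)

    renderer = QgsGraduatedSymbolRenderer('Pop_pct_change')  # placeholder field name
    method = QgsClassificationPrettyBreaks()
    method.setLabelPrecision(1)                # precision of 1, as in the demo
    renderer.setClassificationMethod(method)   # Pretty Breaks mode
    renderer.updateColorRamp(QgsStyle.defaultStyle().colorRamp('Spectral'))
    renderer.updateClasses(layer, 8)           # 8 classes
    layer.setRenderer(renderer)
    layer.triggerRepaint()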

We'll add a small text buffer around the labels using the default values. And for the Placement, we'll select Free (Slow). This will rotate the labels to fit them within the feature – but they will still be interpretable since we have set 'Show upside-down labels' to 'never' in the Rendering tab. We'll also check only draw labels that fit completely within the feature and discourage labels from covering the feature's boundary. You may have noticed that I haven't clicked Apply yet; that's because the Live Update box is checked, meaning the edits are applied on-the-fly as they're entered.

Now we'll select the Manitoba outline layer from the drop-down. Switch back to the Symbology tab. Click on Simple Fill, then on Fill Colour, and alter the Opacity to 0%, or fully transparent. Then we can change the stroke colour to a dark red and enter a width of 0.75 – creating the outline of the main map.

Closing the Styling Panel we'll copy and paste the style from our subset layer – selecting all categories – to the aggregated province layer to ensure consistent visualization of population changes across the two layers and levels. Now we can toggle off other layers leaving only the main map group. If a bookmark was created it can now be used, or in this case since our main map is one layer, the Zoom to Layer tool. And we can use other zoom tools to refine the scale of the Canvas as needed. The scale value at the bottom of the interface is approximately 1 in 7 million.

So to access the Print Layout window, click the New Print Layout icon on the Project Toolbar. The layout manager icon to the right can be used when there are existing layouts that you want to access for further use. Clicking on the New Layout icon we need to provide a name – which we'll call Making Maps in QGIS. Obviously a more specific title is helpful to distinguish different layouts once multiple maps have been created.

So the Print Layout appears as such. There are a variety of panels on the right-side of the window, the most important being the Item Properties Panel, where all items in the Layout are formatted – defaulting to the currently selected item. On the left-hand side are a variety of tools to add different map items to the Layout.

So the first mandatory component is the main map – so we'll click the Add New Map icon and then click and drag to place it within the Layout. There are two interaction icons. The Select/Move Item tool, enabled by default, is used to move, place and resize items within the Layout, while the one below it, Move Item Content, applies to Map Items only and can be used to alter the Canvas location and scale from within the Print Layout. Map items may take a moment to update when the formatting parameters are changed.

So here we can specify the properties for Map 1, such as setting the scale to that of the Canvas, or entering a specific value for the scale in the Main Properties drop-down. However, the scale will adjust automatically if the map item is resized, which is not ideal. So to prevent this we can click on the data defined override box, select edit, and enter the desired scale in the expression box, in this case seven and a half million. Now we can adjust the size without it impacting the scale. Engaging the Move Item Content tool we'll move the canvas for our main map feature so that it is fully visible within the Layout. And we'll also enter -5.0 for the Map Rotation to remove the tilted appearance of Manitoba, which is tied to the applied map projection.
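In script form, placing the map item and pinning its scale and rotation might look like this sketch (layout is the print layout as in the earlier sketches, iface is available in the QGIS Python console, and the scene rectangle coordinates are placeholders):

    from qgis.core import QgsLayoutItemMap
    from qgis.PyQt.QtCore import QRectF

    map1 = QgsLayoutItemMap(layout)
    map1.attemptSetSceneRect(QRectF(10, 20, 190, 170))  # placeholder position/size in mm
    map1.setExtent(iface.mapCanvas().extent())          # start from the Canvas extent
    layout.addLayoutItem(map1)

    map1.setScale(7500000)       # 1:7,500,000, as entered in the demo
    map1.setMapRotation(-5.0)    # remove the tilted appearance of Manitoba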

The Guides panel can help us place items within the Layout. So we can click the Plus Icon and specify a distance for indentation. The guides then appear as dotted red lines. In addition there are a variety of alignment and distribution tools on the Actions toolbar to facilitate laying out and distributing items to create an aesthetically pleasing, well balanced map.

The second mandatory item is the North Arrow, which can point towards true, magnetic or grid north – particularly relevant for mapping at higher latitudes. So we'll use the Add Arrow function. Left-clicking twice, to define the start and end of the arrow – drawing a vertical line – and then right-clicking to finish. The North Arrow does not automatically point towards North, so expand the Rotation drop-down and enter -5 to match the applied rotation to our Main Map. We'll add a label above it, replacing the default text with capital N. Clicking on the Font box we can alter the size to a more appropriate value, 20 in this case should suffice and the alignment to Center and Middle. Once again we'll rotate the label.

Dragging across the Layout we can select both items and group them using the Group tool on the Actions toolbar. Now we can resize and reposition them within the Layout. In the Items panel we can then toggle Items on and off, as well as lock their position within the Layout. So now clicking and dragging across the Layout only the main map is selected.

The third mandatory item is a scale-bar - enabling real-world distances between features to be approximated from the map. So click the scale-bar icon and click within the Print Layout. We can select the map to which it applies, and the format. Here we'll stick with the default style - single box. Depending upon the map scale we could change the desired units to use in the drop-down. However, for our map kilometers is most appropriate. We'll include 4 segments to the Right of 0, and replace 75 with more interpretable break values, entering 200 in this case. We can use the arrows on the keyboard to nudge items in a direction of interest within the layout to facilitate their positioning.
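A comparable scale-bar setup in the Python console, as a sketch tied to the map item from the previous sketch:

    from qgis.core import QgsLayoutItemScaleBar, QgsUnitTypes

    scalebar = QgsLayoutItemScaleBar(layout)
    scalebar.setLinkedMap(map1)
    scalebar.setStyle('Single Box')                    # the default style used here
    scalebar.setUnits(QgsUnitTypes.DistanceKilometers)
    scalebar.setUnitLabel('km')
    scalebar.setMapUnitsPerScaleBarUnit(1000)          # metres per kilometre
    scalebar.setNumberOfSegments(4)                    # 4 segments to the right of 0
    scalebar.setNumberOfSegmentsLeft(0)
    scalebar.setUnitsPerSegment(200)                   # 200 km break values
    layout.addLayoutItem(scalebar)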

The fourth mandatory item is a legend to interpret and distinguish the mapped features. By default the legend includes all layers in the Layers Panel. We can include a generic legend title at the top if needed, but here we'll leave it blank. Then we'll uncheck the Auto-Update box to enable the editing functions and ensure formatting changes are retained. We can reorder legend entries with the arrows and use the minus icon to remove them. So here we'll remove the Manitoba outline and aggregated provincial layers. And right-click on our main map group title and select hidden to remove it from the Legend. We can also rename the layers in the Legend Entries drop-down by double left-clicking. So we'll rename JMBCDPop to Percent Changes. We can also edit value ranges from within the Layout by expanding the Layers drop-down and double left-clicking. Here we'll change the upper and lower break values for the legend to less than -4.0 and greater than 14.0. We'll also remove its background and alter its placement to align with the scale-bar and the main map.

The fifth essential component is a title. It should be simple and quickly convey the map content, including the theme, location and level. So here we'll call it Percent Population Changes in Manitoba (2011-2016): Census Divisions. We'll change the font size to 34, specify the alignment and resize the text box accordingly.

So the final mandatory component is a set of additional text items that specify the map projection, creator and source references - particularly important when the map will be released as a stand-alone document. We can enter the information manually or use expressions to semi-automate the entry of this information. So we'll enter Prepared by: Insert Name, or click the Insert Expression button and in the variables drop-down double-click user full name. Then type Projection colon, NAD83 Statistics Canada Lambert open-bracket. Reopening the expression box once again we can double-click project_CRS in the variables drop-down. So as shown, the manually entered information and expressions are being automatically formatted in the Item Properties panel. Then we need to specify the source references, typing datasets accessed from Statistics Canada. Finally, include an additional expression to include the date created and the program used. So we'll use the concat function and commas to separate the different components. So first - open single-quote and type Created on, colon, space, close quote, then type todate with $now enclosed by brackets. Add another comma, reopen single quotes, space, type with QGIS, space, close quote and finally double-click @qgis_short_version in the variables drop-down.
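Assembled, the dictated text reads roughly as in the sketch below, written as a single layout label whose [% %] segments are evaluated as expressions. Note this is an approximation: the expression function is spelled to_date in current QGIS versions (narrated as "todate" above), and the line breaks are a formatting choice.

    from qgis.core import QgsLayoutItemLabel

    info = QgsLayoutItemLabel(layout)
    info.setText('Prepared by: [% @user_full_name %]\n'
                 'Projection: NAD83 Statistics Canada Lambert ([% @project_crs %])\n'
                 'Datasets accessed from Statistics Canada\n'
                 "[% concat('Created on: ', to_date($now), "
                 "' with QGIS ', @qgis_short_version) %]")
    layout.addLayoutItem(info)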

So with Map 1 formatted and all mandatory components added, we will select the main map and lock the Layers and Layer Styles in the Item Properties panel so that changes in the main interface do not affect its format or scale in the Layout window. Now we can click the Save icons at the top. The first will save both the project in the main interface as well as the Layout, which we can then access with the Layout Manager for further edits. Conversely the second tool can be used to save a particular Layout as a Template for repeated use, which we'll cover in a follow-up demo. So click the first Save Icon, which we'll then access to discuss optional map items in Part II.

(The words: "For comments or questions about this video, GIS tools or other Statistics Canada products or services, please contact us: statcan.sisagrequestssrsrequetesag.statcan@canada.ca" appear on screen.)

(Canada wordmark appears.)