When it comes to language, there are many distinct levels of proficiency, ranging from being able to identify a language you overhear to being able to speak it fluently. Having these distinctions is important because it lets us collect data that gives a general sense of what languages people in a sample speak, how they gauge their own proficiency, and whether their self-assessment falls under the same categories as ours. However, it is often difficult to see where a person falls on that spectrum.
My first task was to properly assign random identification numbers to those who contributed specific sets of data. I wanted truly random, computer-generated numbers, not just a 1-10 count. However, I did not know how to ask Excel to do this for me, so I consulted the internet. I googled “randomly generated numbers excel,” found a few promising articles, and set to work learning. One of the best videos I found was from a YouTuber known as Doug H., who specializes in Excel and its functions; he is amazing! Most of the articles suggested the =RAND() function, which worked perfectly to generate a single random decimal, but I needed whole numbers in a range. Since that called for a minimum and a maximum, I went with the classic 1-100: =RANDBETWEEN(1,100).
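For anyone curious how the same idea looks outside Excel, here is a minimal Python sketch; the card labels are made up for illustration. One caveat worth noting: repeated =RANDBETWEEN(1,100) calls can hand two contributors the same number, while `random.sample` below draws without replacement, so every ID comes out unique.

```python
import random

def assign_random_ids(cards, low=1, high=100):
    """Give each card a unique random ID between low and high.

    random.sample draws without replacement, so no two cards
    can end up with the same number (unlike =RANDBETWEEN).
    """
    ids = random.sample(range(low, high + 1), len(cards))
    return dict(zip(cards, ids))

# Hypothetical card labels, for illustration only.
id_map = assign_random_ids(["card_A", "card_B", "card_C"])
```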
Previously in Linguistic Anthropology for Fall 2017, my fellow students and I learned about the US Census and had David Kraiker, a Data Dissemination and GIS Specialist from the Census, talk to the class about what the organization does. As 2020 fast approaches, so does the new census to be given out to people residing in the United States. Every decade since its inception, the U.S. Census Bureau formulates a new questionnaire for people to answer. The purpose is to collect accurate demographic information and data that can be beneficial for policy making and record keeping. The data collected is publicly available and informs everything from the building of new schools to the managing of hospitals. As noted in recent news reports and blogs, it has also been used electorally to gerrymander districts. The important and daunting task of data collecting has a wide-reaching impact; what kinds of concerns are raised, then, when changes are made to the questions asked? A widely reported and controversial change is the addition of a question about participants’ citizenship status. The citizenship question now looks very likely for 2020, as the Supreme Court is poised to allow it into the survey.
On 3 December 2018 the Language Maps, Language Clouds team had the opportunity to interview David Kraiker of the US Census Bureau who has visited our classroom in the past to share free ways to use ACS language-related data. Below is an overview of the conversation; boldface sections summarize the LMLC team’s questions. To listen to the audio file, click here.
What made you want to work for the Census? David started working at the US Census Bureau after a stint at a map publishing company. He was attracted by better compensation, but he continues to work for the Census Bureau because he is able to encourage the use of data in the hope of improving society. “What makes me want to work for the Census Bureau…I do more for society in this job than I did when I was creating atlases. People are using the data that we have, I hope for good purposes, and it’s a way of improving society.”
One of the concepts learned in Linguistic Anthropology Fall 2017 was the idea of a global language: a language spoken by many people across the world that carries significant weight in government, education, and other social areas. Currently, the global language is English, more specifically American English, with hundreds of millions of speakers. That’s not surprising, as English is a common means of communication in business and scientific journals, but how did it become a global language?
A mini history lesson is in order here, as British English was the global language for a while. The phrase “the empire on which the sun never sets” was absolutely true given the colonial reach of the British Empire on every continent. Such a global presence and vast amount of resources meant that Britain was not only a military power but a social power too. Through its own policies it instituted mandatory teaching of English in some parts of the Empire. Since Britain was also a regional power, people were in a way coerced into learning the language of those who were dominating them.
In one of our textbooks for Linguistic Anthropology, Language in Society, the author Suzanne Romaine dedicates part of chapter 2 to exploring the topic of language death. Language death occurs when a language ceases to be spoken and used by people, rendering it non-existent as a means of communication.
Language death is a scary concept, as it can happen to any language. What causes it has been debated by linguists, with explanations ranging from minority communities being suppressed and overridden by the majority in society to a phenomenon called “language shift,” where a community starts off bilingual but gradually loses its native tongue.
One of the most fascinating concepts learned in Linguistic Anthropology Fall 2017 is that of the language of the powerful and the powerless. Powerful language is characterized by being more active, assertive, and commanding, while powerless language is more hesitating, unsure, and often self-doubting. To give an example, a powerful statement would be “Let’s go to Chili’s this Tuesday,” while a statement marked by powerlessness might be “Uh, I guess I’m in the mood for Chili’s, but I wouldn’t mind going somewhere else, what do you think?” Notice the difference? The first sentence is more of an “I will,” while the second is more doubtful, but it also relates to the way it’s uttered. Tone is all too important: while going over the question part of the statement, did you imagine it being spoken in a higher tone with an unsure inflection? Those are points to be mindful of when detecting whether a person is using powerful or powerless speech.
Data is fun! Excel is a friend with wonderful shortcuts! Those words have rarely if ever been uttered in the English language, but they’re actually true in a way. As the merits and drawbacks of using Excel have been reported before on the blog, I figured it is good to carry on that tradition. Working with self-reported data in this study is an experience that I can never forget, and I believe I can say the same for my fellow student researchers. The data that we worked with provides insight into how people come into contact with various languages through their life experiences. It’s intimate in its own way, as you really get to see and understand people’s lives and shared stories.
But then comes the transcribing and coding part of research, which is an interesting ride on its own. You see, Excel, our primary mode of transferring the data from the index cards, is a very handy tool, but we had to make sure that ALL the data was copied over.
One of the great advantages of being a part of this research is learning the number of languages a person knows, understands, speaks, or is just able to identify. You learn that your classmates are bilingual, trilingual, or even quadrilingual! The ability to communicate in more than one language is a fascinating subject for linguists and was discussed heavily in our Anthropology class. Indeed, this whole research project is based on delving into this area and obtaining more information about it.
People who are bilingual, or who know more than two languages, aren’t as uncommon as one might expect, especially considering a person’s geographical location. The interesting part about gathering data from Seton Hall students is that the campus comprises a mixed ethnic/racial population, with students coming from diverse backgrounds. Available figures show that roughly 45%–50% of students identify as belonging to non-white minority backgrounds! So to discover that the majority of data collected indicates students are overwhelmingly versed in more than one language is astounding, especially when students understand languages that aren’t as well-known as others, such as Uzbek, as documented from one student.
The field of linguistics has taken many different perspectives on language, based on the evidence available in each time period. As taught in Linguistic Anthropology, the field went through many viewpoints, such as evolving from historical linguistics to descriptive linguistics.
Our knowledge of linguistics keeps evolving with time and accurate evidence. Nothing can be a more apt example of this than the debate over how language forms between two great scientists, B.F. Skinner and Noam Chomsky. To start off, Skinner is more widely known in the field of psychology as one of the pioneers of behaviorism, but as mentioned previously, he also theorized about language development. He argued that children learn language from the environment around them, mainly within a behaviorist framework. Basically, as a child learns new language skills, social influences use reinforcement to help their learning along: a child says the word “book,” and their teacher nods and rewards them for saying the right word and identifying the right object.
One of the biggest challenges in working with qualitative data, such as the very self-directed and open-ended responses that our participants provided, is interpreting those statements in a way that generates useful data. I have come to observe that in this particular study, the relatively vague prompt used when administering the survey (something to the effect of “make a statement about each language that you’re aware of”) yielded responses that were either very informative or very (very) vague. Because we asked participants to handwrite their responses on index cards, as opposed to having someone else interview and record their answers or having them use a digital answer form (like the one found elsewhere on this blog), we also had to contend with some instances of unclear or illegible handwriting. Though deciphering somebody’s handwriting ranks relatively low on the scale of challenges that crop up in qualitative research, it can nonetheless be frustrating.
Fooling around with Tableau, I found this cool feature that literally creates word clouds! Take a look at this.
The picture of my desk above illustrates the main issue that we had for the blog during the summer of 2017.
When we all met for our summer meeting, the main problem we had was that we either couldn’t access our Google Drive to get our information or couldn’t connect to the wifi. To get around not being able to connect to the wifi, Laura suggested that she could get the data from her laptop, since Prof. Quizon couldn’t access the drive. However, another problem came up. The laptops we use for this blog are either our personal computers or the laptops the school provides. To log into a school laptop, you need to log into your student email, and to do that, you need wifi access. But for some odd reason, Laura’s laptop could not recognize the campus wifi.
After finally being able to connect to the wifi and getting all the data we needed, we all discussed issues that came up at that point.
One of the main issues, besides connecting to the internet and getting our data, was how to code some of our data into Excel, because all of our data was qualitative. What we decided to do, and how we did it in detail, is covered in a different post, but it all came down to figuring out how to categorize something as something else.
The second issue, which is more personal to me than it is for the others, is how being an alum affects the productivity of the blog and internship. One of the main issues is just getting onto the blog, because we all use our student emails to log in. Not being a student anymore complicates things. The quick fix was to switch to my personal email and then relinquish admin rights after I hand over to the next group.
The final issue touches on the first issue, but in more detail. It had to do with how to categorize something that doesn’t have a category. For example, how would you categorize learning a language from a hymn or song? Would you say the person can speak and recognize it but not understand it? This issue was brought up by Stephen when he realized that some students who took the survey said they can sing and recognize a language but not actually read or understand it.
The easiest and fastest way we decided to address this problem was simply to make a special category for these cases, since it only affected about five or six entries. After going through all our issues and trying to figure out a way around them, we all had pizza and left to enjoy the July weather.
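For the curious, the coding decision described above can be sketched in a few lines of Python. The keywords and category names here are illustrative, not the exact codes from our spreadsheet.

```python
# Map a self-reported statement to a coded category; anything that
# doesn't match a known ability (e.g. "learned it from a hymn")
# falls into the special category we created for those cases.
CODES = {
    "speak": "speaks",
    "understand": "understands",
    "identify": "identifies",
}

def code_response(response):
    for keyword, category in CODES.items():
        if keyword in response.lower():
            return category
    return "special"  # the hymn-or-song cases

print(code_response("I can speak Spanish"))     # speaks
print(code_response("learned it from a hymn"))  # special
```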
While commuting between New York and New Jersey one evening, I tuned into the radio station 93.9 NYC as they started a technology portion of the show. The theme was language: the first story was on using a translation app to navigate China (linked below), and the second was on an odd contraption called ‘the Voder.’ Introduced at the 1939 World’s Fair, the Voder was created by Homer Dudley and produced by Bell Telephone Laboratories. This machine synthesized the first electrical human speech by producing the acoustic components of our speech. A woman ‘works’ the machine almost like a piano, controlling the various components that allow the Voder to ‘talk.’ It even sings “Auld Lang Syne” (a song that many of us today can’t even sing the lyrics to), which I find amazing but at the same time creepy. Although this technology may seem dated compared to our ‘Siri’ and the apps that produce electronic language so fluidly and accurately, it was an important and interesting step forward in the realm of artificial speech production. I wonder what amazing things we will invent today that will improve the communication and interaction of (or completely frighten) our children.
Listen to the story on ‘The Voder’ Here: http://www.wnyc.org/story/the-voder-the-first-machine-to-produce-human-speech/
Translation in Apps Story: http://www.wnyc.org/story/finding-a-pedicure-in-china-using-cutting-edge-translation-apps
Photo taken from: https://120years.net/the-voder-vocoderhomer-dudleyusa1940/
Here’s an article discussing vowels in English as well as other languages:
An extremely interesting point is number two, which discusses how the most common vowel sound in English doesn’t even have its own letter. Can you guess what it is?
When creating our database, we had to input a large amount of information into each column for each index card. Here, I love the simple yet amazing ability to freeze the first row of the spreadsheet. Of course, the same can be done for columns. Whether we were on index card 2, 20, or 120, we could clearly see the column title for the type of information we were inputting.
Another function of Excel that was awesome was the use of pivot tables. Pivot tables allowed us to quickly sort and count our data, giving us an idea of what our data would look like once uploaded for data visualization. For instance, with a pivot table we could see how many speakers were attributed to each language. We could also see who input what data, and sort by type of information. For example, if one of the team members had clicked on my name, they could see how many of the cards I input were English. However, we decided not to keep this view as part of our data set, as the external visualization program we used let us see the same information when we uploaded our data, even allowing clickable charts, maps, etc.
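The counting a pivot table does can also be mimicked in plain Python, for readers without Excel handy. The card data below is hypothetical; our real spreadsheet had many more columns.

```python
from collections import Counter

# Hypothetical (inputter, language) pairs standing in for two
# columns of our spreadsheet.
cards = [
    ("Laura", "English"),
    ("Laura", "Spanish"),
    ("Stephen", "English"),
    ("Stephen", "English"),
]

# Speakers per language -- what the pivot table showed at a glance.
by_language = Counter(lang for _, lang in cards)

# Cards per inputter and language -- the "who input what" view.
by_person_language = Counter(cards)

print(by_language["English"])                      # 3
print(by_person_language[("Stephen", "English")])  # 2
```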
A final function that was greatly appreciated was the ability to upload an Excel spreadsheet to Google Drive, share it, then download it again as an Excel file. This helped greatly, as the team felt most comfortable with Excel over Google Sheets. Though I’m not sure whether this should be attributed to Google or Microsoft (or both), it was nonetheless a great function.
But with the best, also comes the worst…
The biggest problem for me when starting this project was using qualitative data as opposed to quantitative data. When I had previously learned to use an earlier version of Microsoft Excel in high school, we worked with quantitative data and functions. Because of that, I found it a bit challenging at first to be putting in names and words instead of mathematical problems and functions. However, I was surprised to find that when working in a column, Excel will pop up with a fill-in suggestion for a word previously used. So say I was typing the last name ‘Smith’ for a second or fifth time: I would only have typed up to the ‘m’ and Excel would suggest ‘Smith’ for the cell.
Where this turns sour for me is that if you skip a cell and start typing in the cell below the gap, the fill-in is no longer offered. I REALLY wish this carried over within the same column. When it came to really long or odd names, I really wished that Excel would still suggest a fill-in, even when you skip a row.
When trying to visualize our data, we ran into a problem. Where we had input just countries or regions (i.e. Atlantic Midland, Inland North, etc.) as a language’s origin, the visualization technology we were using could not figure out how to map the languages from the country name alone. So we had to go back and put in the capital of each country of origin, and designate a ‘capital’ for different types of English (i.e. North Jersey vs. South Jersey English), which resulted in a more accurate depiction of the location of each language’s origin. Overall, I wish Microsoft Excel would improve its compatibility with other software and websites. Though I understand there’s much time, thought, and agreement involved in that, companies like Amazon and PayPal work with other websites and services to create a smoother experience. Therefore, Microsoft does have the ability to work better with other companies’ programs, and I wish both parties would work to do so in the near future.
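The workaround described above (swapping each origin for a mappable capital before uploading) boils down to a simple lookup. The table below is illustrative, and the ‘capitals’ for the dialect regions were our own designations, not official ones.

```python
# Illustrative lookup from a language's origin to a point the
# mapping software can plot; unrecognized origins pass through
# unchanged so we can spot and fix them later.
CAPITALS = {
    "France": "Paris",
    "Uzbekistan": "Tashkent",
    "North Jersey English": "Newark",   # our designation
    "South Jersey English": "Trenton",  # our designation
}

def mappable_origin(origin):
    return CAPITALS.get(origin, origin)

origins = ["France", "North Jersey English", "Japan"]
mapped = [mappable_origin(o) for o in origins]
print(mapped)  # ['Paris', 'Newark', 'Japan']
```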
Neither of the above images belongs to me. ‘Spirited Away’ is the property of Studio Ghibli/Disney, and the images were found here: giphy.com/search/spirited-away-gif
Our research is free for anyone to use. However, we wanted a clear way to express this. Creative Commons is a nonprofit that provides licenses for your research and pictures. When choosing a license, I chose Attribution. Put simply, anyone can use our research as long as they give us credit.
Creative Commons does an excellent job of making their site user-friendly. The process was simple and easy. I clicked the “Share your work” tab at the top and filled out the questionnaire. When I was finished, they gave me code and told me to post it on our site. At first, I put this on our homepage. However, it just looked like code. After a little more trial and error, I put it in the “text” option in the footer of our site. After I did this, the code became a clickable Creative Commons link. Overall, I am very impressed with Creative Commons and highly recommend them for anyone who is trying to license their work.
When deciding what pages to include in our menu, I had to really think about what pages appear on typical websites. I decided that our Mission Statement should be our homepage so that when you arrive at our site, you know about our project and our goals. I revised the mission statement several times and finally decided upon the finished product you see now.
My second thought was having a page explaining what exactly we mean by Language Maps and Language Clouds. Dr. Quizon thankfully authored this page with working links.
As a team, we decided to rename the blog page to “The Project”. This was a unanimous decision. We wanted to take people step by step through our process.
Our “Contact Us” page is for anyone who has questions, comments, or wants to use our research, which is covered by Creative Commons. The “Contribute” page will be an open forum for anyone who would like to add their languages to our research. We are now working with a WordPress expert who is going to build our questionnaire, which will input directly into a Microsoft Excel spreadsheet, already coded.
We encourage you to check back soon and contribute your own languages!
In my opinion, the best function of WordPress is the edit shortcut that appears when you visit your site. This is extremely helpful in the final stages of production because you can view your site, catch a typo or another minor problem, and hit edit. This takes you back to that page or post on the dashboard. It eliminates several steps you would otherwise have to take, making editing fast and easy.
The worst function of WordPress is not being able to save a draft of a page. Being a student, I would work on the blog at odd times, sometimes in between classes. Even though a page I was working on was not ready to be viewed by the public, I would have to publish it just to save my progress. I am a bit of a perfectionist, so I found it frustrating to publish an incomplete version.
Personally, I think my biggest obstacle in creating the blog was choosing a theme. WordPress has many options, so finding a theme wasn’t the problem; finding one that had all the capabilities I wanted was. The first theme I picked, which I really liked, was called “Vertex.” But there were a few features I wasn’t thrilled about. First, it took the secondary title “A TLTC Blog” and made it look like a button. However, if you clicked on it, nothing happened. This was a bit misleading for our viewers. The button was also in the center of our blog page, and there was no way to move it, edit it, or delete it.
The second problem with this theme was that it didn’t have an option for a header image. When I first picked this theme, I thought that blank space at the top included the header image but it didn’t.
After some searching, I found the “Accelerate” blog theme to be clean and user-friendly. I was also with the rest of the research team when I chose it, so it was nice to have their thoughts as well.