## Week 02 Question Set

a) **What do you feel was the most impressive thing you did in class last week?**

I think the most impressive thing was the last part of the BLAST assignment, where we used `curl` to download some sequences and then used BLAST to assign accession numbers to those sequences in just a few lines of code! I especially like `curl`, since it is so much simpler than going to a website, clicking download, dragging the file to a folder, and renaming it manually. (A sketch of this workflow is at the end of this post.)

b) **What is your weekly goal for making progress on your project? What is the next step?**

The next step for my project is acquiring the data and figuring out where it is stored and how to access it. If I'm not able to access the data from my collaborator, I will need to find a similar dataset to use instead.

c) **There were two readings this week not from the textbook, meant for two different audiences. Which reading did you get the most out of and why? Do you have any questions regarding the Journal of Shellfish Research paper?**

I think I got the most out of the shellfish research paper because it clearly laid out the considerations and steps for a genomics project from start to finish. As someone relatively new to this field, I found it helpful and accessible, although broad. The multifactorial models reading seems like it could be practically helpful when analyzing data, but parts of it went over my head and I didn't feel I had the background to understand everything. One question I had about the shellfish paper: how do you know what quality parameters to use when trimming?

d) **What is your favorite thing about markdown and why?**

I like that Markdown is standardized, so documents look similar from user to user and project to project. Once you are familiar with it, everything is easier to read.

e) **What is the difference between `curl` and `wget`? When would you use one over the other?**

`wget` has a recursive option that downloads every link on a webpage, and it can follow links to other pages several levels deep. This is helpful when you need to download many files spread across multiple pages, but it can also pull in extra files you don't need if you don't limit it. `curl` writes to standard output by default and supports more protocols than `wget` because it is built on the libcurl library. Since `curl` only downloads the URL you specify, it is most useful for single downloads, or when you are using a protocol other than HTTP or FTP. A comparison sketch is below.
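To make the comparison concrete, here is a minimal sketch of each tool's typical use. The URLs, filenames, depth limit, and accept pattern are all hypothetical examples, not from the assignment:

```bash
# Recursive download with wget: follow links up to 2 levels deep,
# stay below the starting directory, and only keep FASTA files so the
# crawl doesn't pull in files we don't need.
wget --recursive --level=2 --no-parent --accept '*.fasta' https://example.edu/data/

# Single-file download with curl: -o names the output file;
# without -o, curl writes the response to standard output.
curl -o sequences.fasta https://example.edu/data/sequences.fasta

# curl also speaks protocols wget doesn't, e.g. SFTP (hypothetical host,
# prompts for a password because of the trailing colon in -u):
# curl -u username: sftp://example.edu/data/sequences.fasta -o sequences.fasta
```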
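And here is the curl-then-BLAST workflow mentioned in part a), as a minimal sketch under my own assumptions: the filenames, database name, and parameter values are hypothetical, and it assumes the NCBI BLAST+ command-line tools are installed. The class assignment may have used a different database or settings.

```bash
# 1. Download query sequences in one line instead of clicking through a browser
#    (hypothetical URL and filename).
curl -o query.fasta https://example.edu/data/unknown_seqs.fasta

# 2. Build a nucleotide BLAST database from a reference FASTA (hypothetical file).
makeblastdb -in reference.fasta -dbtype nucl -out refdb

# 3. BLAST the queries against it. -outfmt 6 gives a tab-separated table whose
#    second column (sseqid) holds the accession of the best-matching sequence.
blastn -query query.fasta -db refdb -evalue 1e-20 -max_target_seqs 1 \
  -outfmt 6 -out query_hits.tab
```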