I’ve been collecting data on the battery life of my Apple Magic Mouse and Wireless Keyboard. I thought it’d be good to post this data here so that you, dear reader, can draw your own conclusions about how long the batteries should last.
This is interesting data to look at because I like statistics, and analyzing data helps you predict what to expect… Obviously the batteries’ lifespan varies according to how much time you spend using these devices. In my case I tend to use the computer 8 hours/day on average.
Just now I decided to search for a neat app ( Finch for Mac ) so that I can track the time I spend working on the computer. This will surely give me more accurate data in the near future. For now, take this data as a rough estimate…
Take a look at the following Google spreadsheet:
As you see, the keyboard has great battery life. There were also periods in which I didn’t work as much as I’d like on the computer, which is why the mouse charges used to last 3 weeks. Now that I’m doing more intensive work, the mouse battery lasts only 1.5 weeks, half the time it previously took to drain.
For the record: I also own an Apple Battery Charger. Recharging the batteries is an easy task, and once more I’m contributing to the environment.
All these Apple devices are great, IMHO, as is the case with everything made by Apple!
My only “complaint” is that I think the mouse should have longer battery life. Maybe that will change with more recent technology iterations.
Today I’m showing a practical use of the RIGHT function that can help you in a lot of situations.
Consider this sample spreadsheet:
I’m using the above spreadsheet to keep track of my progress during online theoretical practice tests for my first driver’s license.
I like math, so I wanted to know how many tests people have taken (that is, how many tests the server has served) since my last test.
The RIGHT function is being used in the column named “Tests since last”, starting at the second practice test I took. For example, the actual formula in cell C4 is this:
C4 = right(B4, 7) - right(B3, 7)
Let’s break this formula down and explain each piece:
B4 contains the URL that points to the practice test I took. In this case the URL is this:
As you see, there’s a number at the end of the URL. This number is 7 digits long and is a sequential identifier for each test. It allows me to go back to the test later to review what I did right and what I did wrong.
Well, to answer that intriguing question I need to subtract the number present in the URL just above, in cell B3, from this one. Now look at the formula in cell C4 above. It does just that. The RIGHT function gives me the last 7 characters of the URL (last because it takes characters from right to left). If you wanted the first 7 characters, you’d use the LEFT function instead.
The RIGHT function has this form: RIGHT(text, number). It returns the last character or characters of a text string. Text is the text whose right part is to be determined. Number (optional) is the number of characters to take from the right part of the text.
Easy to understand, isn’t it?
Let’s do the math making the substitution of values:
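The spreadsheet with the real URLs isn’t reproduced here, so the numbers below are made up for illustration; only the resulting difference of 800 matches my actual data:

```
C4 = right("…/1234567", 7) - right("…/1233767", 7)
   = 1234567 - 1233767
   = 800
```

Note that RIGHT returns text, but the subtraction coerces both values to numbers automatically.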
Now I know that from the time I took the first practice test on 11/25 to the second test later that same day, 800 tests were served. This is more of a curiosity metric than anything, but I like math anyways. :o)
This was a simple and practical use of the right function that came in handy in this situation.
Whether you’re in the office, at home, or on the road, Microsoft Office Web Apps help you get more things done virtually anywhere and anytime. These convenient online companions to Microsoft Word, Excel, PowerPoint, and OneNote offer you an easy way to access, view, and edit documents directly from your Web browser.
My intent is to highlight a really important feature that Google is currently missing in its service: cell references (row numbers and column letters) in online published documents.
Take the same Excel spreadsheet from last post but now hosted in the cloud by Office Excel Web App:
As I used cell references throughout that last post to explain things, it would be really useful if Google spreadsheet had cell references so that users could spot the cell I was referring to instantly.
You can see above that Excel Web App does show cell references making the overall user web experience better.
For example, above I allow you to edit (AllowTyping=True) the contents of this spreadsheet on the fly. I also allow you to sort the table contents (AllowInteractivity=True). The modifications you make are valid only in this session. If you close or reload the page your changes won’t be saved. This is really nice. A lot of possibilities emerge from this.
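For reference, these options travel as parameters on the embed URL. A sketch of what such an embed iframe generally looks like; the src below is a placeholder for the URL Excel Web App gives you, not my real one:

```html
<iframe width="500" height="400"
        src="EMBED_URL_FROM_EXCEL_WEB_APP&AllowTyping=True&AllowInteractivity=True">
</iframe>
```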
Changes you make are reflected in the spreadsheet. Try for example adding one more TL class in cell D5. The cell color will change to yellow. You’ll see that cell I17 will have its value changed. It’s going to be -1. The formulas are being updated automagically. You can even see the formulas by double clicking a specific cell that has a formula.
One thing I noticed is that color formatting rules applied to cells are kept exactly as you define them in the Excel desktop app. I tried to configure these rules in Excel Web App but couldn’t find a menu option that would allow me to change them.
It’s important to mention that although a great part of the Excel desktop app features aren’t available for configuration online, Excel Web App still honors the desktop configurations you make in your workbook. This is cool!
If you're curious, this is the code I'm using to embed the spreadsheet in this post:
I’ll show you a practical use of the CountIf function that you can apply in both Microsoft Excel spreadsheets and Google Docs spreadsheets.
Let’s start with this sample spreadsheet:
I’m using the above spreadsheet to keep track of my progress during the course for my first driver’s license.
The CountIf function is being used in column Remaining classes for each discipline. For example, the actual formula in cell I17 is this:
I17 = H17 - countif(D3:I14, B17)
Let’s break this formula down and explain each piece:
H17 is a fixed value = 18. I know it beforehand. In this case for the Transit Legislation discipline I must have a total of 18 classes.
The countif function has this form: COUNTIF(range, criteria). So the above countif(D3:I14, B17) is telling us that:
D3:I14 is the range of cells we’re searching and B17 is what we’re after… The range encompasses all the disciplines’ acronyms from 11/23/2010 to 12/9/2010 across the whole timetable.
B17 is the discipline’s acronym, in this case it is TL.
Easy to understand, isn’t it?
Using countif this way, I can tell how many classes I have scheduled for that discipline. Then I just subtract this value from the total number of required classes to get a picture of how many classes I still have to go through.
Let’s do the math:
CountIf returns 18 because TL appears 18 times in the timetable. Making the substitution of values:
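With H17 = 18 required classes and 18 scheduled TL classes found in the range, the substitution works out to:

```
I17 = H17 - countif(D3:I14, B17)
    = 18 - 18
    = 0
```

That is, no remaining classes for the Transit Legislation discipline.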
… if I add rows using NPOI in C#, rows added under the table won't be automatically included in the table, and my chart is not updated the way I would like it to be. How can I work around this problem?
I tried to simulate the problem with a simple spreadsheet and got the same behavior Zip describes: if I added one row just beneath the last row in the table, that row wasn’t included in Excel’s data table and consequently the chart bound to the table wasn’t updated to reflect the new data.
To work around this problem, let’s consider the following spreadsheet shown in Figure 1:
Figure 1 - NPOI with Excel Table and dynamic Chart
As you see we have a simple Excel data table with a players column that represents the name arguments of the chart, 4 columns for the months that form the category labels arguments (X axis) and the values arguments for the months going from Jan through Apr (Y axis).
Using NPOI to insert a new row in the table shown above we do the following:
// Creating a new row... 0 is the first row for NPOI.
HSSFRow row = sheet.CreateRow(5); // Row 6 in Excel
// Creating new cells in the row... 0 is the first column for NPOI.
row.CreateCell(1).SetCellValue("Eve Paradise"); // Column B
row.CreateCell(2).SetCellValue(4); // Column C
row.CreateCell(3).SetCellValue(3); // Column D
row.CreateCell(4).SetCellValue(2); // Column E
row.CreateCell(5).SetCellValue(1); // Column F
Figure 2 - NPOI with Excel Table and dynamic Chart - Adding a new row
Figure 2 shows us the problem stated by Zip in his comment. The new row we just added wasn’t included in the table. The chart that is linked to the table won’t update because it isn’t aware of the new row.
How to work around this problem? That’s the question!
After playing with this case for 4 hours, I found a way of doing what Zip asks for.
Here’s how I did it:
Expand your Excel data table to row 10. I expanded only 4 rows just to show you how to work around NPOI’s current limitation.
To expand your table, click the small handle in the lower-right corner of the cell occupying the lower-right corner of the table. This handle gives you a way to expand the table. Usually it’s easier just to add data and let Excel expand the table, which doesn’t work with NPOI. But if you want to add several new rows or columns all at once, the handle is a good way to do it.
After expanding your table save the spreadsheet. It’ll be the template spreadsheet used to create new spreadsheets.
Figure 3 shows what the above spreadsheet looks like when the table is expanded to row 10:
Figure 3 - NPOI with Excel Table and dynamic Chart - Expanding the Table
We can see that row 6, added using NPOI, is now part of the table because we expanded the table. The chart now shows the new data, but we got a new problem: the chart shows blank series that reflect the empty rows in the data table. Take a look at the chart’s legend, for example, and you’ll see squares that represent nothing.
How to get over this? Well, we just need to filter the data in the table as shown in Figure 4:
Figure 4 - NPOI with Excel Table and dynamic Chart - Filtering Data (blank series)
Filter out the blank rows by unchecking (Blanks), circled in red in Figure 4. Doing so, the chart will reflect the change, showing only the filtered data as you see in Figure 5:
Figure 5 - NPOI with Excel Table and dynamic Chart - Filtered Data (no empty rows)
Now we have an Excel data table that is filtered in the Player column (take a look at the funnel symbol). Another difference is that the rows that contain data are marked in blue. Although only 4 rows of data are displayed, our table actually has 8 rows because we expanded it. The other 4 rows are hidden because they were filtered out for not having any data yet.
Positioning the mouse cursor within the Excel data table, I’ll add a Total Row (option circled in red) in the table so that I can summarize data the way I want for each column as shown in Figure 6:
Figure 6 - NPOI with Excel Table and dynamic Chart - Adding Total Row
With this Excel template spreadsheet we can now use NPOI to fill our sheet with 4 more rows of data. Let’s do it. This is the code I used:
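The row-filling part of that code isn’t reproduced in this post, but it follows the same CreateRow/CreateCell/SetCellValue pattern shown earlier. A sketch under those assumptions; the player names and values below are made up, and `sheet` is the HSSFSheet obtained from the template workbook:

```csharp
using NPOI.HSSF.UserModel;

// Illustrative data: player name followed by the Jan-Apr values.
object[][] newRows =
{
    new object[] { "Player G", 4, 3, 2, 1 },
    new object[] { "Player H", 1, 2, 3, 4 },
    new object[] { "Player I", 2, 2, 2, 2 },
    new object[] { "Player J", 3, 1, 4, 1 }
};

int rowIndex = 6; // row 7 in Excel; 0 is the first row for NPOI
foreach (object[] data in newRows)
{
    HSSFRow newRow = sheet.CreateRow(rowIndex++);
    newRow.CreateCell(1).SetCellValue((string)data[0]); // Column B: player name
    for (int col = 1; col < data.Length; col++)
    {
        // Columns C through F: the monthly values
        newRow.CreateCell(col + 1).SetCellValue((int)data[col]);
    }
}
```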
// Forcing formula recalculation so that the Total Row gets updated
sheet.ForceFormulaRecalculation = true;
After filling the spreadsheet we get the result shown in Figure 7:
Figure 7 - NPOI with Excel Table and dynamic Chart - Chart updated automatically/dynamically
This is the workaround! :o)
The rows added with NPOI now are part of the table and are shown in the chart.
As a last hint: remember to expand your Excel data table to the number of rows you think your spreadsheet will store so that the rows added with NPOI get included in the table and the chart gets updated.
Again, this is good proof of what free software like NPOI can do for us. Even when dealing with more elaborate concepts, such as Excel tables and charts, NPOI makes it easy to get the job done.
I wish the next version of NPOI did what Zip wants automatically, that is, recognized rows added under the last row of an Excel table. At the very least we could have a parameter letting the user define whether the new row should become part of the table or not.
Hope you enjoy this post.
Visual Studio 2008 C# ASP.NET MVC Web Application
You can get the Microsoft Visual Studio Project at:
Recently I had to implement some code to create an Excel spreadsheet/report using C#.
The task was: given an Excel spreadsheet template, a .XLS file (with formulas, pivot tables, macros, etc.), I had to fill in some data in one of the sheets and send the modified spreadsheet back to the user requesting the operation (an Excel report).
The following attests the need for Excel nowadays:
Excel has long been recognized as the de facto standard when it comes to presenting management reports. The unique combination of great calculation engine, excellent charting facilities, pivot tables and the possibility to perform “what if” analysis, make it the “must have” business intelligence tool. by John Tunnicliffe
I had a great time while studying the possible ways of doing what the task asks for.
It appears to be a simple task at first, but as time passes you realize this is not the case, at least as of the moment this blog post was written. :-)
Firstly I tried automating Excel through COM automation, but since the application was meant to be used in a web context, automation is not recommended. Why? Because COM automation for Excel is not thread safe, that is, EXCEL.EXE was not built to be shared by concurrent users accessing the same process; besides, Microsoft Excel must be installed on the server, which is not always possible.
Microsoft does not currently recommend, and does not support, Automation of Microsoft Office applications from any unattended, non-interactive client application or component (including ASP, ASP.NET, DCOM, and NT Services), because Office may exhibit unstable behavior and/or deadlock when Office is run in this environment.
I’ve just experienced the above. EXCEL.EXE insisted on staying alive in Task Manager even after processing the data and being closed in code. Each call to process a spreadsheet opens an EXCEL.EXE process on the server. With those processes not being closed as they should, you get lots of them in memory, which can overload the server.
Do not use COM automation if you are developing server-side code.
After struggling with COM automation I finally got to know ExcelPackage, which works with the Office Open XML format (OOXML). It can read an .XLSX (Microsoft Excel 2007) template and create another .XLSX file based on that template, giving you lots of possibilities to work with the template copy.
I got really happy because I had found a way of doing what the task asked for, but with one detail: ExcelPackage works only with the .XLSX file format, leaving the .XLS (Microsoft Excel 2003) format out of the game. This turned out to be a big impediment because the client (software buyer) wouldn’t allow us to install the famous Microsoft Office Compatibility Pack for Word, Excel, and PowerPoint 2007 File Formats on user machines, which would let users open and save OOXML file formats even with the Microsoft Office 2003 suite.
Discovering ExcelPackage was good, but it didn’t do the trick, which required the use of an .XLS template.
I went back to Googling and strove to find an open source library that would let me do what I wanted. After some time I finally discovered NPOI. Wow, it could read the .XLS template and generate the end result I wanted. Great. I downloaded the NPOI 1.2.1 for .NET 2.0 binaries immediately and started playing with it to see what it could really do.
OK, after this short story, I’ll show you how to use both open source projects (ExcelPackage and NPOI).
I’ve created a new ASP.NET MVC project as can be seen in this picture:
In the Content folder I’ve placed the template spreadsheets. In the Libs folder I’ve placed the DLLs necessary to use both ExcelPackage and NPOI open source projects.
The controller that interests us is the one called ExcelWriterController:
The methods that handle the creation of the spreadsheet are: ExcelPackageCreate and NPOICreate.
For each controller action (method) there’s a corresponding view that renders the UI to the user. Those views are the ones shown inside the ExcelWriter folder: ExcelPackage.aspx and NPOI.aspx.
This is the Home Page of the Excel Writer MVC Application - take a look at the tabs (ExcelPackage and NPOI) that lead you to the View pages:
Each view has a button which when clicked calls the corresponding action method on the ExcelWriterController.
Let’s see the code that goes into the ExcelPackageCreate method:
/// <summary>
/// Creates a new Excel spreadsheet based on a template using the ExcelPackage library.
/// A new file is created on the server based on a template.
/// </summary>
/// <returns>Excel report</returns>
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult ExcelPackageCreate()
{
try
{
FileInfo template = new FileInfo(Server.MapPath(@"\Content\ExcelPackageTemplate.xlsx"));
FileInfo newFile = new FileInfo(Server.MapPath(@"\Content\ExcelPackageNewFile.xlsx"));
// Using the template to create the newFile...
using(ExcelPackage excelPackage = new ExcelPackage(newFile, template))
{
// Getting the complete workbook...
ExcelWorkbook myWorkbook = excelPackage.Workbook;
// Getting the worksheet by its name...
ExcelWorksheet myWorksheet = myWorkbook.Worksheets["Sheet1"];
// Setting the value 77 at row 5 column 1...
myWorksheet.Cell(5, 1).Value = 77.ToString();
// Saving the change...
excelPackage.Save();
}
TempData["Message"] = "Excel report created successfully!";
return RedirectToAction("ExcelPackage");
}
catch(Exception ex)
{
TempData["Message"] = "Oops! Something went wrong.";
return RedirectToAction("ExcelPackage");
}
}
Let’s see the code that goes into the NPOICreate method:
/// <summary>
/// Creates a new Excel spreadsheet based on a template using the NPOI library.
/// The template is changed in memory and a copy of it is sent to
/// the user computer through a file stream.
/// </summary>
/// <returns>Excel report</returns>
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult NPOICreate()
{
try
{
// Opening the Excel template...
FileStream fs =
new FileStream(Server.MapPath(@"\Content\NPOITemplate.xls"), FileMode.Open, FileAccess.Read);
// Getting the complete workbook...
HSSFWorkbook templateWorkbook = new HSSFWorkbook(fs, true);
// Getting the worksheet by its name...
HSSFSheet sheet = templateWorkbook.GetSheet("Sheet1");
// Getting the row... 0 is the first row.
HSSFRow dataRow = sheet.GetRow(4);
// Setting the value 77 at row 5 column 1
dataRow.GetCell(0).SetCellValue(77);
// Forcing formula recalculation...
sheet.ForceFormulaRecalculation = true;
MemoryStream ms = new MemoryStream();
// Writing the workbook content to the FileStream...
templateWorkbook.Write(ms);
TempData["Message"] = "Excel report created successfully!";
// Sending the server processed data back to the user computer...
return File(ms.ToArray(), "application/vnd.ms-excel", "NPOINewFile.xls");
}
catch(Exception ex)
{
TempData["Message"] = "Oops! Something went wrong.";
return RedirectToAction("NPOI");
}
}
One drawback of the ExcelPackage library is that it must create a file on the server. There are some modifications to the library that enable you to create the template copy in memory and send it to the user as the NPOI library does. Take a look at ExcelPackage’s discussion page at CodePlex and specifically this thread: Why create Excel spreadsheets on the server?
The great thing about NPOI of course is that it enables you to work with the template in code and then send a copy of the spreadsheet directly to the user. The template remains intact and the user receives a modified copy of the template which contains the data processed by the application.
With NPOI when you click on the Create Excel report button you get the download dialog window:
With ExcelPackage you’d have to get the path of the file created on the server, in this case \Content\ExcelPackageNewFile.xlsx, and then send that file to the user. This is an extra step and adds an additional burden to the server. I didn’t implement it, so I leave it as an exercise for you.
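A minimal sketch of that extra step, under the assumption of an action method shaped like the ones above (the action name here is hypothetical, not part of the sample project):

```csharp
/// <summary>
/// Sends the Excel file previously created on the server by ExcelPackageCreate
/// back to the user computer.
/// </summary>
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult ExcelPackageDownload()
{
    // Path of the file created on the server...
    string path = Server.MapPath(@"\Content\ExcelPackageNewFile.xlsx");
    // Sending the server file to the user computer...
    return File(path, "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
                "ExcelPackageNewFile.xlsx");
}
```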
Well, the spreadsheet included in this simple project has only formulas but you can for sure have an Excel template with lots of formulas, pivot tables, macros, etc. This gives you the power of Excel in code in a clean fashion.
Hope this helps shed some light on this topic!
Note Using the open source libraries presented in this post you won’t need Microsoft Excel installed on the server.
EPPlus is a .net library that reads and writes Excel 2007 files using the Open Office XML format (XLSX).
EPPlus supports ranges, cell styling, charts, pictures, shapes, named ranges, autofilters and a lot of other stuff.
Updated on 6/2/2010
I’m working on an ASP.NET project that uses .NET Framework 1.1. In such a case I needed to use NPOI 1.2.1 for .NET 1.1.
The code you’ll use with NPOI 1.2.1 for .NET 1.1 is the same presented on this post. Just pay attention to include this using directive inside your C# code:
using NPOI.HSSF.UserModel;
Updated on 1/3/2010
If you like NPOI, help spread the word about it voting up on this ad:
Visual Studio 2008 C# ASP.NET MVC Web Application
You can get the Microsoft Visual Studio Project at:
To learn how to integrate ASP.NET charting controls with ASP.NET MVC and Google Spreadsheet Data API I created a new ASP.NET MVC project and started coding a sample application.
The following is how I got it all together:
using System.Collections.Generic;
using System.Drawing;
using System.Linq;
using System.Web.Mvc;
using System.Web.UI.DataVisualization.Charting;
using Google.GData.Client;
using Google.GData.Extensions;
using Google.GData.Spreadsheets;
namespace MvcChartGoogleSpreadsheet.Views.Home
{
/// <summary>
/// ASP.NET MVC application + ASP.NET charting controls + Google Spreadsheet Data API web-based application
/// It demonstrates the operations supported by each of these technologies.
/// It requires authentication in the form of your Google Docs & Spreadsheets username
/// and password, and retrieves data from a worksheet of your choice.
/// A chart is created with the data acquired through a CellFeed query.
/// </summary>
public partial class Index : ViewPage
{
private List<WorksheetEntry> allWorksheets = new List<WorksheetEntry>();
protected void Page_Load(object sender, System.EventArgs e)
{
// Calling the method that configures the chart.
ConfigureChart();
// Creating a Google SpreadsheetService object passing to it the name of the application.
SpreadsheetsService service = new SpreadsheetsService("MvcChartGoogleSpreadsheet");
// Google account information (login and password)
service.setUserCredentials("username@gmail.com", "password");
GetAllSpreadsheetsAndWorksheets(service);
// Using LINQ query expression to get a specific worksheet.
var entry = from wse in allWorksheets
            where wse.Title.Text == "2008 leniel.net" // This is the name of the worksheet.
            select wse;
// Demonstrate a CellFeed query.
CellFeed cellFeed = GetWorksheetCellFeed(service, entry.First());
// Each entry represents a cell within the worksheet.
foreach(CellEntry cellEntry in cellFeed.Entries)
{
// I just want to get the contents of column 2 of the worksheet.
// The value of the cell present in column 2 will be used in the X axis.
if(cellEntry.Cell.Column == 2)
{
// I get the value of column 7 (cellEntry.Cell.Column + 5) of the same row. This value will be used in the Y axis.
// I replace the colon present in the value with a dot. I do so to make the data valid for calculating values.
string yValue = ((CellEntry)cellFeed.Entries.SingleOrDefault(es => ((CellEntry)es).Cell.Row == cellEntry.Cell.Row && ((CellEntry)es).Cell.Column == cellEntry.Cell.Column + 5)).Cell.Value.Replace(":", ".");
// I then remove the extra data that isn't necessary at all in my case.
yValue = yValue.Remove(yValue.Length - 3, 3);
// I pass the X and Y values to create a Point used in the series of the chart.
chart1.Series["Hours of work"].Points.AddXY(cellEntry.Cell.Value, yValue);
}
}
}
private void ConfigureChart()
{
chart1.Series.Add("Hours of work");
chart1.Titles.Add("My chart title");
// Add header separator of type line
chart1.Legends["Default"].HeaderSeparator = LegendSeparatorStyle.Line;
chart1.Legends["Default"].HeaderSeparatorColor = Color.Gray;
// Add Color column
LegendCellColumn firstColumn = new LegendCellColumn();
firstColumn.ColumnType = LegendCellColumnType.SeriesSymbol;
firstColumn.HeaderText = "Color";
firstColumn.HeaderBackColor = Color.WhiteSmoke;
chart1.Legends["Default"].CellColumns.Add(firstColumn);
// Add Legend Text column
LegendCellColumn secondColumn = new LegendCellColumn();
secondColumn.ColumnType = LegendCellColumnType.Text;
secondColumn.HeaderText = "Name";
secondColumn.Text = "#LEGENDTEXT";
secondColumn.HeaderBackColor = Color.WhiteSmoke;
chart1.Legends["Default"].CellColumns.Add(secondColumn);
// Add AVG cell column
LegendCellColumn avgColumn = new LegendCellColumn();
avgColumn.Text = "#AVG{N2}";
avgColumn.HeaderText = "Avg";
avgColumn.Name = "AvgColumn";
avgColumn.HeaderBackColor = Color.WhiteSmoke;
chart1.Legends["Default"].CellColumns.Add(avgColumn);
// Add Total cell column
LegendCellColumn totalColumn = new LegendCellColumn();
totalColumn.Text = "#TOTAL{N1}";
totalColumn.HeaderText = "Total";
totalColumn.Name = "TotalColumn";
totalColumn.HeaderBackColor = Color.WhiteSmoke;
chart1.Legends["Default"].CellColumns.Add(totalColumn);
// Set Min cell column attributes
LegendCellColumn minColumn = new LegendCellColumn();
minColumn.Text = "#MIN{N1}";
minColumn.HeaderText = "Min";
minColumn.Name = "MinColumn";
minColumn.HeaderBackColor = Color.WhiteSmoke;
chart1.Legends["Default"].CellColumns.Add(minColumn);
// Show labels at every 2 days
chart1.ChartAreas["ChartArea1"].AxisX.LabelStyle.Interval = 2;
chart1.ChartAreas["ChartArea1"].AxisX.MajorGrid.Interval = 2;
chart1.ChartAreas["ChartArea1"].AxisX.MajorTickMark.Interval = 2;
// Set series tooltips
chart1.Series["Hours of work"].ToolTip = "#VALX";
// Set the width of the chart
chart1.Width = 510;
// Set legend docking
chart1.Legends["Default"].Docking = Docking.Bottom;
// Set legend alignment
chart1.Legends["Default"].Alignment = StringAlignment.Center;
}
/// <summary>
/// Gets a list of all the user's spreadsheets, and the
/// list of worksheets that each spreadsheet contains.
/// </summary>
/// <param name="service">An authenticated SpreadsheetsService object</param>
private void GetAllSpreadsheetsAndWorksheets(SpreadsheetsService service)
{
SpreadsheetQuery query = new SpreadsheetQuery();
SpreadsheetFeed feed = service.Query(query);
foreach(SpreadsheetEntry entry in feed.Entries)
{
GetAllWorksheets(service, entry);
}
}
/// <summary>
/// Gets a list of all worksheets in the specified spreadsheet.
/// </summary>
/// <param name="service">An authenticated SpreadsheetsService object</param>
/// <param name="entry">The spreadsheet whose worksheets are to be retrieved</param>
private void GetAllWorksheets(SpreadsheetsService service, SpreadsheetEntry entry)
{
AtomLink link = entry.Links.FindService(GDataSpreadsheetsNameTable.WorksheetRel, null);
WorksheetQuery query = new WorksheetQuery(link.HRef.ToString());
WorksheetFeed feed = service.Query(query);
foreach(WorksheetEntry worksheet in feed.Entries)
{
allWorksheets.Add(worksheet);
}
}
/// <summary>
/// Performs a cell range query on the specified worksheet to
/// retrieve only the cells contained within the specified range.
/// </summary>
/// <param name="service">An authenticated SpreadsheetsService object</param>
/// <param name="entry">The worksheet to retrieve</param>
private static CellFeed GetWorksheetCellFeed(SpreadsheetsService service, WorksheetEntry entry)
{
AtomLink listFeedLink = entry.Links.FindService(GDataSpreadsheetsNameTable.CellRel, null);
CellQuery query = new CellQuery(listFeedLink.HRef.ToString());
// Defining the range of cells that I want to be retrieved.
query.Range = "B5:G29";
CellFeed feed = service.Query(query);
return feed;
}
}
}
The above code is commented so I don’t think it needs more words.
ASP.NET MVC
I’ve already written about ASP.NET MVC in a post titled Hello World Web Site with ASP.NET MVC. If you don’t know what MVC means, don’t panic! It’s just an architectural and design pattern that advocates a clean separation of concerns in software engineering.
To get the ASP.NET MVC I’d recommend the Web Platform Installer that Microsoft released not so long ago.
Just download and install the Web Platform Installer and select what web development tools you want to be installed on your machine.
As you can see above, I already have the ASP.NET MVC Release Candidate 1 installed on my machine. In case you don’t have it yet, select the checkbox and click the Install button. The Web Platform Installer will do the rest of the job, that is, it’ll download the packages and install them conveniently for you.
An interesting thing about the Web Platform Installer is that it’ll always have the most updated bits to be installed.
After installing the ASP.NET MVC you’re ready to create an ASP.NET MVC Web Application in Visual Studio 2008:
After reading some bits here and some bits there I went directly to the Libraries and Code page and went to the download page of google-gdata – the .NET library for the Google Data API.
I downloaded the file Google Data API Setup(1.3.1.0).msi and installed it. It has sample projects for all Google Data APIs. I was interested in the Spreadsheet API and so I got the specific code and dropped it on the ASP.NET MVC project.
As you can see in the above code the using statements include 3 references to Google Data API DLLs:
using Google.GData.Client;
using Google.GData.Extensions;
using Google.GData.Spreadsheets;
These DLLs are part of the Google Data API Setup(1.3.1.0).msi installation and are located at C:\Program files\Google\Google Data API SDK\Redist on my machine. You’ll need to add these DLLs to the References folder of the ASP.NET MVC project.
ASP.NET Charting Controls
There are charts of all types:
- Bar and Column Charts
- Line Charts
- Area Charts
- Pie and Doughnut Charts
- Point Charts
- Range Charts
- Circular Charts
- Accumulation Charts
- Data Distribution Charts
- Price Change Financial Charts
- Advanced Financial Charts
- Combinational Charts
To get the charting controls in Visual Studio 2008 I downloaded 2 files:
After installing both files, open Visual Studio 2008 and go to a webform/viewpage .ASPX page. Now in the Toolbox you’ll find the new Chart control.
Drag and drop the chart onto the page and you’re ready to go.
Look at the using statement present in the code behind page:
using System.Web.UI.DataVisualization.Charting;
This is the column chart I got when I ran the application:
LINQ
I used some constructs of LINQ in the above sample application. If you need further details about this fantastic technology: read my post LINQ - Language Integrated Query.
Note:
To work with Dates in a chart I had to go through the Microsoft Chart Controls Samples project and look at Working with Chart Data – Working with Dates – Setting Time Scale.
Summary
As you could see the ASP.NET charting control is feature rich and you can use it to show data from all sorts of sources as for example a Google spreadsheet. I just showed here the simplest of the charts.
Using the charting controls in an ASP.NET MVC application is a really straightforward task. Of course, you’d have action methods that’d generate the chart for you based on some logic you implement in such methods. The sample given here was just to exercise the integration of technologies available nowadays to build richer internet applications.
Note: Remember to change/substitute the username and password present in the file C:\MvcChartGoogleSpreadsheet\Views\Home\Index.aspx.cs so that the application can log you on to Google Spreadsheet service and get your data (spreadsheet along with the worksheets).
This post is related to one of the best pieces of coursework I did during my computer engineering course. I'm really proud of it. It was a cornerstone in my programming career and helped me choose an area to focus my efforts on from that moment forward. I was in the 5th term out of 10 (the 3rd year of the course out of 5, more exactly). The discipline was Programming Languages.
This subject is fantastic and is used extensively throughout the computer science field.
First I'll give a short description of the Quicksort and Binary Search algorithms, and then I'll present the work that my dear brother in faith Wellington Magalhães Leite and I did.
Quicksort
Quicksort is a well-known sorting algorithm developed by C. A. R. Hoare. Typically, quicksort is significantly faster in practice than other sorting algorithms, because its inner loop can be efficiently implemented on most architectures.
Binary Search
A binary search algorithm (or binary chop) is a technique for finding a particular value in a sorted list. It makes progressively better guesses by selecting the middle element of the remaining range, comparing its value to the target value, and determining whether the selected value is greater than, less than, or equal to the target. A middle element that turns out to be too high makes the element below it the new top of the range, and one that is too low makes the element above it the new bottom. Pursuing this strategy iteratively narrows the search by a factor of two each time and homes in on the target value.
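The halving strategy described above can be sketched as a short iterative C++ function. This is illustrative only (the program later in this post uses a recursive version over strings):

```cpp
// Iterative binary search over a sorted int array: an illustrative sketch of
// the halving strategy described above. Returns the index of key, or -1 if
// the key is not present.
int binary_search(const int* a, int n, int key)
{
    int lo = 0;
    int hi = n - 1;
    while (lo <= hi)
    {
        int mid = lo + (hi - lo) / 2;   // middle of the remaining range
        if (a[mid] == key)
            return mid;                 // found it
        if (a[mid] < key)
            lo = mid + 1;               // discard the lower half
        else
            hi = mid - 1;               // discard the upper half
    }
    return -1;                          // key is not in the array
}
```

Each pass through the loop either finds the key or throws away half of what is left, which is exactly the "narrows the search by a factor of two" behavior described above.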
Our paper
Our paper is entitled Quicksort and Binary Search Algorithms. You can get a copy at the end of this post.
Without further ado, see its abstract below:
Sorting and searching algorithms are a core part of computer science. They are used throughout programming work whenever you need to sort a set of data or search for a specific record (key) present in such a set of data.
Quicksort is one of the fastest (quick) sorting algorithms and is most used on huge sets of data, where it performs really well.
Binary search is one of the fastest searching algorithms and is applied to a sorted set of data. It halves the search space in each iteration, hence its name (binary).
In this paper we present the intrinsic nature of each algorithm as well as a functional implementation of such algorithms in the C++ programming language.
Keywords: quicksort, binary search, sorting algorithms, searching algorithms, c++ programming language
CONTENTS
1 INTRODUCTION
1.1 Objective
1.2 Definition
1.2.1 Sorting algorithms
1.2.2 Searching algorithms
2 DEVELOPMENT
2.1 Quicksort algorithm (swapping and partitioning)
2.1.1 Detailed steps
2.2 Binary search algorithm
2.3 Studying the efficiency of the methods
2.3.1 The big O notation
2.3.2 Quicksort efficiency
2.3.2.1 The best and the worst case
2.3.2.2 Comparison with other sorting algorithms
2.3.3 Binary search efficiency
3 APPLICATION
3.1 Quicksort implementation
3.2 Binary search implementation
4 CONCLUSION
5 REFERENCES
Words of wisdom
As I mentioned, during the 5th term of the computer engineering course our teacher Marcus Vinicius Carvalho Guelpeli selected some sorting and searching algorithms to assign to the class as coursework.
The coursework was to be done in pairs, and each pair had to select a sorting and a searching algorithm and compose a paper about them. We selected the quicksort and binary search algorithms. The teacher warned us that these weren't the easy ones. I just thought: that's what I want. I don't want the easy ones. Why? Because if you take only the easy problems you'll never understand anything that demands a deeper approach, and every time a difficult task comes along you'll tend to refuse it. What's the best thing to do? Just accept the challenge and go for it. Chances are you'll succeed. That's just what happened with us.
We didn't just write about quicksort and binary search; we implemented them and presented them to the class in a PowerPoint presentation. The teacher liked it so much that our grade was the highest possible! :-) In the end, what did we feel? An amazing feeling: the work was done and we had learned a lot from it. That's what a college, and even an online computer science degree program, is supposed to do: give you the subjects and motivate you, teaching the basics so that you can dig into the more difficult aspects of the subject being taught.
So what's next? Well, I'll explain how we implemented the quicksort and the binary search algorithms.
Quicksort and Binary search algorithms implementation in C++
One of the things I've always heard about is code reuse. You search for something already implemented so that you don't have to reinvent the wheel. Of course, you'll complement or even adapt the available code to your situation. That's what we did. I found some code for the quicksort at MSDN and implemented the binary search myself. Unfortunately, the page at MSDN isn't available anymore; it's been three years since I hit that page.
I wanted a way of measuring elapsed time so that we could compare the efficiency of both methods when fed with different input data sets. The input is nothing more than a .txt text file full of numbers in this case, though it could be any data you want. In each test case we passed a text file with a different quantity of random numbers. For example, in one test case we passed a file named 2500.txt, meaning 2,500 random numbers; in another test case we passed a file named 7500.txt, and so on. I think you got it. Doing so, we could compare how well the algorithms were performing.
To generate the random numbers we used an Excel spreadsheet with the formula =RAND()*1000000. For each data set we generated new numbers and copied and pasted them into the text files that serve as the input for our program. During a coursework like this one you get to learn everywhere, even a new formula in Excel. It's really good. ;-)
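If you'd rather not use Excel, the same kind of input file can be produced programmatically. Here's a minimal C++ sketch (the path and count are just examples, not part of the original coursework):

```cpp
#include <cstdio>
#include <cstdlib>

// Writes 'count' pseudo-random numbers in [0, 999999] to a text file, one per
// line, mimicking the Excel formula =RAND()*1000000. Call srand() first if
// you want a different sequence each run.
bool write_random_file(const char* path, int count)
{
    FILE* out = fopen(path, "wt");
    if (out == NULL)
        return false;                    // could not create the file
    for (int i = 0; i < count; i++)
        fprintf(out, "%d\n", rand() % 1000000);
    fclose(out);
    return true;
}
```

Calling write_random_file("2500.txt", 2500) would produce a file in the same shape as the test inputs described above.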
Again, I searched for a timing class that I could reuse, and sure enough I found one. I didn't use it as-is, but it taught me how to measure time in C++. It's amazing how fast you can implement something: much of what you need in programming is already implemented. You just have to search for it, which I suppose is what you're doing here! You searched for the subject of this post and here you are, seeing something implemented. Try to learn from it; don't just copy the entire work and think you know about it. That's wrong. Try to understand what the code is doing. Dive into the theory, because it explains the inner essence.
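As a taste of what such timing code boils down to, here is a minimal sketch using the C clock() function, which is the same approach the program below uses around its QuickSort call (the busy loop is just a stand-in for real work):

```cpp
#include <ctime>

// Times a simple busy loop with clock(): sample the tick counter before and
// after the work, then divide the tick difference by CLOCKS_PER_SEC to get
// seconds.
double time_busy_loop(long iterations)
{
    clock_t start = clock();
    volatile long sum = 0;   // volatile: keep the loop from being optimized away
    for (long i = 0; i < iterations; i++)
        sum += i;
    clock_t end = clock();
    return (double)(end - start) / CLOCKS_PER_SEC;
}
```

The same before/after pattern wraps the QuickSort and BinarySearch calls in the program further down.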
The code that follows is well commented, which is something every developer should do. You see, it was three years ago that we worked with this code. Today it's difficult to remember every step I took; the comments helped me remember almost everything.
Below I present the quicksort method we borrowed from MSDN (adapted to fit our case). Note the use of the Partition method (explained in the accompanying paper):
// QuickSort implementation
void QuickSort(char** szArray, int nLower, int nUpper)
{
    // Check for non-base case
    if (nLower < nUpper)
    {
        // Split and sort partitions
        int nSplit = Partition(szArray, nLower, nUpper);
        QuickSort(szArray, nLower, nSplit - 1);
        QuickSort(szArray, nSplit + 1, nUpper);
    }
}

// QuickSort partition implementation
int Partition(char** szArray, int nLower, int nUpper)
{
    // Pivot with first element
    int nLeft = nLower + 1;
    char* szPivot = szArray[nLower];
    int nRight = nUpper;

    // Partition array elements
    char* szSwap;
    while (nLeft <= nRight)
    {
        // Find item out of place
        while (nLeft <= nRight && strcmp(szArray[nLeft], szPivot) <= 0)
            nLeft = nLeft + 1;
        while (nLeft <= nRight && strcmp(szArray[nRight], szPivot) > 0)
            nRight = nRight - 1;

        // Swap values if necessary
        if (nLeft < nRight)
        {
            szSwap = szArray[nLeft];
            szArray[nLeft] = szArray[nRight];
            szArray[nRight] = szSwap;
            nLeft = nLeft + 1;
            nRight = nRight - 1;
        }
    }

    // Move pivot element
    szSwap = szArray[nLower];
    szArray[nLower] = szArray[nRight];
    szArray[nRight] = szSwap;
    return nRight;
}
Now see the binary search method that we implemented ourselves:
int BinarySearch(char** szArray, char key[], int nLower, int nUpper)
{
    // Termination case: the range is empty, so the key is not present.
    // (Note: 0 doubles as the "not found" flag here, so a key sitting at
    // index 0 would be misreported; -1 would be a safer sentinel.)
    if (nLower > nUpper)
        return 0;

    int middle = (nLower + nUpper) / 2;
    if (strcmp(szArray[middle], key) == 0)
        return middle;   // key found at the middle position
    else
    {
        if (strcmp(szArray[middle], key) > 0)
            // Search the left half
            return BinarySearch(szArray, key, nLower, middle - 1);
        // Search the right half
        return BinarySearch(szArray, key, middle + 1, nUpper);
    }
}
The next listing shows the function prototypes and the main entry point, which calls a menu. Depending on the parameters the user passes, we call the quicksort and binary search methods:
// Function prototypes
void Menu(void);
void QuickSort(char** szArray, int nLower, int nUpper);
int Partition(char** szArray, int nLower, int nUpper);
int BinarySearch(char** szArray, char key[], int nLower, int nUpper);

// Application initialization
int main(void)
{
    char op;
    do
    {
        Menu();
        printf("\n\nDo you want to run a new QuickSort? Y/N");
        op = getche();
        if (islower(op))
            op = toupper(op);
    }
    while (op == 'Y');
    return 0;
}
void Menu(void)
{
    // Clear screen
    system("CLS");

    // Control execution time
    clock_t initial, final;

    // Print startup banner
    printf("\nQuickSort C++ Sample Application\n");
    printf("Copyright (c)2001-2002 Microsoft Corporation. All rights reserved.\n\n");
    printf("MSDN ACADEMIC ALLIANCE [http://www.msdnaa.net/]\n\n");
    printf("BinarySearch C++ Sample Application\n");
    printf("Copyright (c)2005 Leniel Braz de Oliveira Macaferi & Wellington Magalhaes Leite.\n");
    printf("UBM COMPUTER ENGINEERING - 5TH SEMESTER [http://www.ubm.br/]\n\n");

    // Describe program function
    printf("This program example demonstrates the QuickSort and BinarySearch algorithms by\n");
    printf("reading an input file, sorting its contents, writing them to a new file and\n");
    printf("searching on them.\n\n");

    // Prompt user for filenames
    char szSrcFile[1024], szDestFile[1024];
    printf("Source: ");
    gets(szSrcFile);
    printf("Output: ");
    gets(szDestFile);

    // Read contents of source file
    const long nGrow = 8;
    long nAlloc = nGrow;
    long nSize = 0;
    char** szContents = new char*[nAlloc];
    char szSrcLine[1024];
    FILE* pStream = fopen(szSrcFile, "rt");
    while (fgets(szSrcLine, 1024, pStream))
    {
        // Trim newline character
        char* pszCheck = szSrcLine;
        while (*pszCheck != '\0')
        {
            if (*pszCheck == '\n' && *(pszCheck + 1) == '\0')
                *pszCheck = '\0';
            pszCheck++;
        }

        // Append to array
        szContents[nSize] = new char[strlen(szSrcLine) + 1];
        strcpy(szContents[nSize], szSrcLine);
        nSize = nSize + 1;
        if (nSize % nGrow == 0)
        {
            // Resize the array
            char** szPrev = szContents;
            nAlloc += nGrow;
            szContents = new char*[nAlloc];
            memcpy(szContents, szPrev, nSize * sizeof(char*));
            delete[] szPrev;   // delete[] matches new char*[]
        }
    }
    fclose(pStream);

    initial = clock();
    // Pass to QuickSort function
    QuickSort(szContents, 0, nSize - 1);
    final = clock();

    // Write sorted lines
    pStream = fopen(szDestFile, "wt");
    for (int nIndex = 0; nIndex < nSize; nIndex++)
    {
        // Write line to output file
        fprintf(pStream, "%s\n", szContents[nIndex]);
    }
    fclose(pStream);

    // Report program success
    printf("\nThe sorted lines have been written to the output file.\n\n");

    // QuickSort execution time
    double duration = (double)(final - initial) / CLOCKS_PER_SEC;
    printf("The QuickSort execution time was: %2.9lf s = %.0lf ms = %.0lf \xE6s\n\n", duration, duration * 1000, duration * 1000000);

    char op = '\0';
    do
    {
        printf("Do you want to run a BinarySearch to locate a specific key? Y/N");
        op = getche();
        if (islower(op))
            op = toupper(op);
        if (op == 'Y')
        {
            printf("\n\nType the key you want to search for: ");
            char key[1024];
            gets(key);
            initial = clock();
            if (BinarySearch(szContents, key, 0, nSize - 1))
            {
                final = clock();
                duration = (double)(final - initial) / CLOCKS_PER_SEC;
                printf("\nKey found!\n\n");
                printf("The BinarySearch execution time was: %2.9lf s = %.0lf ms = %.0lf \xE6s\n\n", duration, duration * 1000, duration * 1000000);
            }
            else
            {
                final = clock();
                duration = (double)(final - initial) / CLOCKS_PER_SEC;
                printf("\nKey not found!\n\n");
                printf("The BinarySearch execution time was: %2.9lf s = %.0lf ms = %.0lf \xE6s\n\n", duration, duration * 1000, duration * 1000000);
            }
        }
        else
        {
            // Deallocate entire array
            for (int nIndex = 0; nIndex < nSize; nIndex++)
                // Delete current array element
                delete[] szContents[nIndex];
            delete[] szContents;
            szContents = NULL;
        }
    }
    while (op == 'Y');
}
To run the program:
- Enter the name of a file that contains unsorted data. You can use the sample files included in the .ZIP package, such as 1000.txt and 2500.txt;
- At the "Source" prompt, type: 1000.txt;
- At the "Output" prompt, type a name for the file that will hold the sorted data, e.g.: sorted.txt;
- After the sorting process, choose whether or not to execute a Binary Search. If yes, provide a value to search for. If not, choose whether or not to execute a new Quicksort.
Postscript:
- To generate random numbers, use the Random numbers generator.xls file;
- The file QuicksortBinarySearch.cpp contains the source code. It can be used freely; just mention the authors.
Efficiency comparison
For the sake of comparison I've run some test cases with different input files. See the result in the table that follows:
Quicksort and Binary search performance

n        File name    File size (bytes)   Quicksort (ms)   Binary search (ms)
10000    10000.txt    122,880             16               0
25000    25000.txt    200,704             78               0
50000    50000.txt    401,408             219              0
75000    75000.txt    602,112             360              0
100000   100000.txt   802,816             516              0
It's important to note that the program's total run time is longer than the quicksort times shown above, but that isn't the quicksort itself: the program also has to read the file contents and write the sorted data back to the output file. The timing functions only run while the quicksort is executing.
For the binary search key I entered values located at the beginning, the middle, and the end of the sorted file. There were no timing changes: the binary search found every key I entered in less than a microsecond (reported as 0 µs). These tests ran on an AMD Athlon XP 2400 with 512 MB of RAM.
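Those 0 µs readings are no surprise: binary search halves the remaining range with every comparison, so even the largest test file needs only a handful of steps. A quick C++ sketch to count the worst-case number of halvings:

```cpp
// Counts the worst-case number of comparisons binary search needs for a
// sorted set of n items: each comparison halves the remaining range, so the
// count is the number of times n can be halved before reaching zero.
int max_comparisons(long n)
{
    int steps = 0;
    while (n > 0)
    {
        n /= 2;     // one comparison discards half of what is left
        steps++;
    }
    return steps;
}
```

For n = 100000 (the largest input file above), max_comparisons returns 17, so even the slowest lookup involves only 17 string comparisons, far below what the clock can resolve in milliseconds.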