Keep it LEAN, Build it AGILE! – Nepris Story Part 2


LEAN and AGILE! I wouldn't dare build a product any other way. I have had my fair share of waterfall, BIG-design projects with large enterprises, but that is a different world. I agree that some projects require thorough analysis and design before you start building. But in that case, we know exactly what we are building (or do we? haven't you read that the majority of software projects fail?). Well, that's not the debate here. When you are building a product as a startup, you know very little about the needs of the user when you start out.

It’s all about solving a pain point!

Yes, I don't think there is any merit to building a product if it is not solving at least one pain point for your user. We start there, build something, show it to the user, and iterate. You will do this and collect feedback continuously…well, forever!

We wanted to have our product ready for a pilot release by the beginning of the 2013 school year. Given that teachers are very busy during the first month of the school year, we had another month or so before actual pilot usage began. It was still just the two of us in the company; we had identified Mark (Fry) as our first hire (we wanted to wait until we raised some seed money). We met regularly, drew a lot of napkin drawings and Balsamiq mockups, and I was in coding heaven, building our pilot product.

For the pilot release we wanted to update our home page too, not just the app, so we did that as well. We wanted to look somewhat professional and established even though we were only a few months old as a company. We did launch our beta/pilot product on time. Sabari was already working with many school districts in the North Texas and Austin areas and convinced many to try us.

I am happy to report that our first school year (2013), our pilot year, was phenomenal! We built a lot more into the platform. We had aimed to pilot with (maybe) 10 schools; it soon became 10 school districts, and by the end of the 2013-2014 school year, we had reached 400+ schools across 41 states! Among these users, teachers from 60+ schools had industry professionals present to their classrooms through the Nepris platform, a total of 170 live interactions!

I used to listen in on these live sessions early on, and I must say I have seen some amazing sessions presented by some really knowledgeable professionals. Moreover, it's fun to watch how the kids react; they ask great questions, and I am sure they took away good learning from these sessions.

During this time we grew as a company too. We moved to Collide Center (now Collide Village) in McKinney, TX. We raised seed money that allowed us to bring in Mark (Fry) for Product and Operations, Lana (Moore) for Business Development, Reshma, Shankari, and Sandhya for Operations and Support, Yuki for UI/UX design, and Anuj and Rajib to help with development. We also partnered with High Steps (Eric Reeves) to help us with sales strategy and investors.

As we get closer to the 2014-15 school year, we are working hard to get ready for it. On the product development side, we are building more tools and features and revamping our UI/UX based on the feedback we got; on the business side, we are talking to various partners, integrating with them, etc.

I am looking forward to another exciting school year!

Cheers!
Binu


Nepris – Story so far – Part 1


It's been some time since I have blogged anything here. As someone who is running a startup (and who also has a family and kids :), time is always at a premium. Not an excuse, just a fact! I want to share the Nepris story so far with you. My co-founder Sabari and I started Nepris in early 2013. Sabari had been working on a different idea before this, and I was always involved in conversations with her about that venture and about possibly doing something else too. But Nepris really started the day (it was actually 11pm at night) when she drove to my place to say, "Let's pick one idea from this list."

The idea of bringing industry closer to classrooms sounded very appealing to me. We both come from a social background where education was given more importance than anything else. I believe that kids learn a lot in school; bringing real-world relevance to that learning is the natural next step in that process. At Nepris, we wanted to build a platform that gives teachers the ability to bring industry professionals virtually into the classroom, so that they can augment their classroom instruction with some real-world application.

Once we decided that we wanted to work on Nepris and build the platform, I left my position with Alkami (I was VP of Engineering there for a short time) and started building the product. The last thing we wanted to do was a BIG design: build the product, take it to market, and only then see how it fits the need. We had identified some pain points that we wanted to solve, and we wanted to be as "LEAN" as possible when it came to building the product.

We were lucky that we were able to convince Pam McBride, an Engineering & Robotics teacher from McKinney High, to try this out. She was very excited about the idea and jumped in to give it a shot with us. From the industry side, Anbu from General Motors was kind enough to agree to talk to the classroom. We had done some napkin specs, and based on those I built a band-aid solution where users could sign up and write up a request, which created a video conferencing schedule that could be used by both parties on the day of the session. That was our MVP.

I researched all the leading video conferencing platforms that we could potentially integrate with to deliver our service. Since WebEx is the most prominent one, I played with its APIs for some time. It was not that bad, and I could have used it, but there were some issues: it did not offer an easy way to develop and test, and one of our requirements was that all our sessions be recorded. WebEx recorded, but it did so in a proprietary WebEx format; we would have had to convert the recordings into a more generic format to do anything with them.

I read about Zoom.us in an article that showed up on LinkedIn. I found Zoom to be a perfect fit for us. They were also a startup (an established, well-funded one), and they were very easy to work with. I was able to contact someone (Daniel) at Zoom, and he got us in touch with Nick Hong (Head of Product at Zoom). The Zoom team was very helpful with our questions, and we were able to integrate well with Zoom.

Our MVP was a hit! The teacher loved it! The kids who attended the session found it useful. The industry professionals who presented (Anbu and Raj from GM) thought it was a great experience. This was a big validation for us; I threw out the MVP code base and started writing our pilot product. We secured some office space in Gravity Center (which unfortunately closed its doors a couple of months later). Sabari and I started meeting regularly, drawing more napkin specs and Balsamiq mockups, and thus began the building phase.

Tune in for Part 2 of this…

Cheers!


Upgrading Servicestack to 4.0 – Notes


Here are some notes from nepris.com's upgrade of ServiceStack to 4.0. It was not too painful, but there are some changes that need special attention. I hope you will find these useful as you go through the upgrade process. I was using version 3.9.71 before the upgrade, and there were some references to previous versions in different library projects. This 4.0 upgrade effort helped us clean up the references in all projects and bring everything onto the same (latest) version of ServiceStack.

  1. Refer to the ServiceStack release notes before you begin your upgrade and while you plan it.
  2. Use a tool like ReSharper (if you have access to one, or download a trial version). It is always helpful to have the right tools when going through large upgrades or code refactoring.
  3. Before you pull down the latest versions of ServiceStack via NuGet, use the NuGet Package Manager to remove all the older versions you may have. To do this, right-click on the project, click Manage NuGet Packages, click on Installed packages, and remove all the old ServiceStack libraries (it's easy to spot the old ones; they still have the old green icon :). Alternatively, you can click Tools > Library Package Manager > Package Manager Console. This brings up a PowerShell console; select your project, then use uninstall-package <packagename> -Version <Version Number> to uninstall them one by one. Refer to packages.config in your project to see the versions of the individual ServiceStack packages that are installed. REMOVE ALL OLD!
  4. You are now ready to upgrade your ServiceStack libraries; install the libraries that you need.
  5. If you are using ServiceStack authentication, make sure you get ServiceStack.Server (this has OrmLiteAuthRepository). Also get ServiceStack.Authentication.OpenId and ServiceStack.Authentication.OAuth2, depending on what you use.
  6. using ServiceStack.ServiceInterface.Auth is now ServiceStack.Auth
  7. using ServiceStack.ServiceInterface => ServiceStack (yes, a lot of stuff got rolled into the ServiceStack namespace)
  8. using ServiceStack.CacheAccess… => ServiceStack.Caching
  9. TranslateTo<> is now ConvertTo<> (the auto-mapping helper; see the sketch right after this list)
  10. On the OrmLite side there were many changes; some examples are shown below.
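A quick sketch of the rename from item 9, assuming a hypothetical Company entity and a CompanyDto with matching property names:

var dto = company.ConvertTo<CompanyDto>();          // v4
// was: var dto = company.TranslateTo<CompanyDto>(); // v3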

For the OrmLite changes (item 10), I used to have it like this…


public IDbConnectionFactory DbConnectionFactory;

public Company GetCompanyById(int id)
{
    // v3 style: Run() opens and disposes the connection for you
    return DbConnectionFactory.Run(c => c.FirstOrDefault<Company>(p => p.Id == id));
}

Now changed like this

public IDbConnectionFactory DbConnectionFactory;

public Company GetCompanyById(int id)
{
    using (var db = DbConnectionFactory.Open())
    {
        return db.Single<Company>(p => p.Id == id);
    }
}

Changes to Insert()

 db.Insert(newCompany);
 insertedId = db.LastInsertId();

change into

 db.Save(newCompany);
 insertedId = newCompany.Id;

Save() now checks for the existence of the record and inserts or updates accordingly. db.Insert(newCompany, selectIdentity: true) is supposed to make the last-inserted-id work in one call, but I found that Save() worked better for me.
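If you do want the identity back from Insert(), here is a minimal sketch of the selectIdentity overload (assuming, as above, that Company has an auto-increment Id):

using (var db = DbConnectionFactory.Open())
{
    // selectIdentity: true returns the generated id directly,
    // replacing the old Insert() + LastInsertId() pair
    var insertedId = db.Insert(newCompany, selectIdentity: true);
}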

Issue with TimeSpan mapping to Sql Server DbType of time(7)

In previous versions, ServiceStack used TIME to map this, but it is now mapped to BIGINT. So this is a breaking change if you have a TimeSpan field in C# mapping to time(7) in SQL Server. The workaround is to create a dummy column in the table with the bigint data type, map/copy all the existing values into it from the old column, delete the old column, and rename the new column to the old name. (Thanks Demis, he actually wrote a test for me to demonstrate this: https://github.com/ServiceStack/ServiceStack.OrmLite/commit/af59d266248740d495617826b637aa9d3b124ede )
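To illustrate the breaking change, here is a hypothetical POCO that would be affected:

public class ClassSession
{
    public int Id { get; set; }

    // v3 OrmLite mapped this to time(7) in SQL Server; v4 persists
    // TimeSpan as BIGINT (ticks), hence the column migration described above
    public TimeSpan Duration { get; set; }
}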

Some more examples:

db.Scalar("select count (*) from CompanyAdmin where CompanyId = {0}",companyId))
is now...
db.ScalarFmt("select count (*) from CompanyAdmin where CompanyId = {0}", companyId);
//Similar for SelectFmt<>() and UpdateFmt<>()

Authentication Table(s) changes:

If you were using ServiceStack's authentication, it previously used the UserAuth and UserOAuthProvider tables. 4.0 uses the following tables:

  • UserAuth (same as the old table, but has more columns)
  • UserAuthDetails (Now the OAuth/OpenId information goes here)
  • UserAuthRoles – For user roles

From an upgrade point of view, you will need to take care of the UserAuth table and have a way to migrate your existing data out of the UserOAuthProvider table. In my case, I had to add the following additional fields to the table (using a script, since we were working with a live table):

ALTER TABLE [dbo].[UserAuth]
	ADD  [PhoneNumber] [varchar](8000) NULL,
	[Company] [varchar](8000) NULL,
	[Address] [varchar](8000) NULL,
	[Address2] [varchar](8000) NULL,
	[City] [varchar](8000) NULL,
	[State] [varchar](8000) NULL,
	[InvalidLoginAttempts] [int]  NULL,
	[LastLoginAttempt] [datetime] NULL,
	[LockedDate] [datetime] NULL,
	[RecoveryToken] [varchar](8000) NULL
go
UPDATE [dbo].[UserAuth] set InvalidLoginAttempts = 0
go

AppHost.cs (web project) changes:

SetConfig(new EndpointHostConfig { ServiceStackHandlerFactoryPath = "api" });
change to:
SetConfig(new HostConfig { HandlerFactoryPath = "api" });

container.Resolve<IUserAuthRepository>().InitSchema();

This will make sure that you have no missing tables for Auth. (Even with this, the migration case explained above still needs to be taken care of, but it will create the new tables for you.)
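For context, here is a minimal sketch of how the auth repository registration in Configure() might look before that InitSchema() call (assuming the OrmLite-backed repository from ServiceStack.Server mentioned in item 5; adjust the names to your own IoC setup):

container.Register<IDbConnectionFactory>(c =>
    new OrmLiteConnectionFactory(connectionString, SqlServerDialect.Provider));

container.Register<IUserAuthRepository>(c =>
    new OrmLiteAuthRepository(c.Resolve<IDbConnectionFactory>()));

// Creates any missing UserAuth* tables
container.Resolve<IUserAuthRepository>().InitSchema();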

//IResourceManager is now IAppSettings
public AppConfig(IAppSettings appSettings)
{
    ...
}
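And a minimal sketch of the corresponding registration, assuming you use the built-in AppSettings class (which reads <appSettings> from web.config) as the IAppSettings implementation:

// IResourceManager (v3) consumers now take IAppSettings (v4)
container.Register<IAppSettings>(new AppSettings());
container.Register(c => new AppConfig(c.Resolve<IAppSettings>()));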

Web.Config changes:

    <httpHandlers>
      <add path="api*" type="ServiceStack.WebHost.Endpoints.ServiceStackHttpHandlerFactory, ServiceStack" verb="*" />
    </httpHandlers>

update it to the following:


    <httpHandlers>
      <add path="api*" type="ServiceStack.HttpHandlerFactory, ServiceStack" verb="*" />
    </httpHandlers>
   
  <location path="api">
    <system.web>
      <customErrors mode="Off" />
      <httpHandlers>
        <add path="*" type="ServiceStack.HttpHandlerFactory, ServiceStack" verb="*" />
      </httpHandlers>
    </system.web>
    <!-- Required for IIS 7.0 -->
    <system.webServer>
      <modules runAllManagedModulesForAllRequests="true" />
      <validation validateIntegratedModeConfiguration="false" />
      <handlers>
        <add path="*" name="ServiceStack.Factory" type="ServiceStack.HttpHandlerFactory, ServiceStack" verb="*" preCondition="integratedMode" resourceType="Unspecified" allowPathInfo="true" />
      </handlers>
    </system.webServer>
  </location>

And of course, since 4.0 is a licensed product, you will need to add your license key in the web.config.
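The appSettings key for this is servicestack:license. Alternatively, a quick sketch of registering it in code (licenseKeyText assumed to be loaded from a secure setting, not hard-coded):

// Register the license at startup, e.g. first thing in AppHost.Configure()
Licensing.RegisterLicense(licenseKeyText);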


More changes:

var ut = authService.RequestContext.Cookies["user-type"].Value.ToLower();
is now
var ut = authService.Request.Cookies["user-type"].Value.ToLower();
public override void OnAuthenticated(IServiceBase authService, IAuthSession session, IOAuthTokens tokens, Dictionary<string, string> authInfo)

is now

public override void OnAuthenticated(IServiceBase authService, IAuthSession session, IAuthTokens tokens, Dictionary<string, string> authInfo)

And on the client side, the Auth request DTO has been renamed to Authenticate:

IServiceClient client = new JsonServiceClient(listenUrl);
var authResponse = client.Send(new ServiceStack.Common.ServiceClient.Web.Auth
{
    provider = CredentialsAuthProvider.Name,
    UserName = request.UserName,
    Password = request.Password,
    RememberMe = true
});

is now

IServiceClient client = new JsonServiceClient(listenUrl);
var authResponse = client.Post(new Authenticate
{
    provider = CredentialsAuthProvider.Name,
    UserName = request.UserName,
    Password = request.Password,
    RememberMe = true
});

I hope you find this helpful! I will add more as I come across (or remember) more changes.
Cheers!


Deploying your Durandal 2.0 SPA to production – Some Notes


Durandal 2.0 has some breaking changes from its earlier version. If you are converting from 1.x to 2.0, read this documentation first. I haven't converted my project that is currently in production (nepris.com) to 2.0 yet; I wanted to figure out 2.0 before I jump into that change. So I started with Durandal 2.0 on a different, smaller application. I used the Durandal Starter Kit with Visual Studio 2012 to get the base template up. While developing locally, everything ran pretty well, but I ran into some issues when building for production. Here are some notes… (BTW, I like 2.0 already; it feels much cleaner and more stable than 1.x.)

  1. Durandal 2.0 uses a new build system based on Weyland. I followed the instructions on the Durandal site (link). It failed!! The issue is in the weyland-config.js file in the root of your project. Open this config file and delete the line saying mainConfigFile: 'App/main.js'. This will get you past the error.
  2. If you are working with Visual Studio, you can automate this build; follow these instructions. (Durandal docs are getting better day by day :)
  3. For every production deployment, you still need to bust the cache using the following:
    requirejs.config({
        paths: {
            'text': '../Scripts/text',
            'durandal': '../Scripts/durandal',
            'plugins': '../Scripts/durandal/plugins',
            'transitions': '../Scripts/durandal/transitions'
        },
        urlArgs: "bust=v0.11"
    });
    
    //In your View
    @if(HttpContext.Current.IsDebuggingEnabled) {
      <script type="text/javascript" src="~/Scripts/require.js" data-main="App/main"></script>
    }
    else {
       <script type="text/javascript" src="~/App/main-built.js?v=0.11"></script>
    }
    

The rest of the stuff was pretty straightforward. I will be converting my 1.x application to 2.0 soon and will post once everything gets done.

Cheers!


Integrating DISQUS to your SPA


I am building a single page application (SPA) using Durandal JS (with a ServiceStack-powered API layer). In one of the sections, I wanted to have a DISQUS discussion integrated. This blog post outlines the problems I faced and the solutions I found.

On a normal web site, integrating DISQUS into your pages is easy: you sign up for DISQUS, get the JavaScript Universal Code for web sites, and add it where you want your discussion thread to show up on your page. When the web page loads, DISQUS loads the required JavaScript and adds the thread to that section of your page. By default, DISQUS uses the page URL as the key to the discussion and manages unique thread ids for your posts and replies within it.

Now, with this model, we have a problem: in a SPA there are no "pages"; it is all one "single" page, with routes (by some convention) managing the loading of the various views. So if you simply paste the DISQUS code where you want the discussion to show up in your SPA view HTML, the result will be disappointing! At least it was for me the first time. In my defense, it was late at night and I was not thinking clearly. When the page ran, it did not load anything…blank! The problem is that a JS snippet added to your view markup does not get executed in a SPA architecture.

Solution

We need to load the required JavaScript file just like we load any other piece of code required by the application. Here is how you do it…

function initDisqus() {
    window.disqus_shortname = 'your-app-name';
    var src = '//' + window.disqus_shortname + '.disqus.com/embed.js';
    $.getScript(src)
        .done(function () {
            // embed.js is loaded; DISQUS is now available globally
        });
}

This works, well, mostly… If you run your page now, you will see your discussion initialized on the page, and you can start adding comments. I was happy to get to this point, but I soon realized that all the comments were getting added to the root of the site, and if you reloaded the page, DISQUS failed to attach the comments back to the discussion; all those comments were kind of orphaned. Soon I realized that the presence of '#' (hash) in my URL was also messing with how DISQUS works!! If you have a URL like http://www.mysite.com/#/blog/my-first-blog, DISQUS does not treat anything after the '#' as part of your URL, and it attaches the thread id right after the '#'.

So what do you do? One way (at least what I think) is to change '#' to '#!'; for example, the URL given before would change to http://www.mysite.com/#!/blog/my-first-blog. Durandal did not complain and loads the page, and DISQUS seems to like it too. One other thing you will need to do is reload DISQUS every time a new view loads, with a new unique identifier for the page and the link. This way, all the discussions that happen will be tracked under your unique view (URLs).

A more complete solution:

function initDisqus() {
    window.disqus_shortname = 'your-app-name';
    var src = '//' + window.disqus_shortname + '.disqus.com/embed.js';
    $.getScript(src)
        .done(function () {
            // Re-bind DISQUS to the current view's URL so each view
            // gets (and reloads) its own discussion thread
            DISQUS.reset({
                reload: true,
                config: function () {
                    this.page.identifier = window.location.href;
                    this.page.url = window.location.href;
                }
            });
        });
}

Note: Make sure to call this function from your "viewAttached" callback within your view model code, so that DISQUS is reset each time the view is attached.

That’s all for now folks, Happy programming!

Cheers!
Binu


Deploying Durandal SPA to Production


***EDIT – This post talks about Durandal 1.x ***
I have been building a single page application (SPA) using Durandal JS. In this post, I want to talk about some of the steps you should take when deploying your single page application to your production/live web server. In my case, I am using ASP.Net MVC 4, and I use Windows Azure to host my website. If you want to learn more about how to build a SPA using Durandal, I strongly suggest you watch John Papa's SPA Jump Start course on Pluralsight.

Typically your SPA application will have a structure like this:

In this, all the files except main-built.js are your application. This includes Durandal, RequireJS, your files, etc. main-built.js is the optimized single file containing the JavaScript and HTML files that you need to deploy to your production web server. So how do you package this for production? It is simpler than I thought. I was thinking of using the ASP.Net bundling capability to bundle all the JS files, but decided against it because I felt that managing my SPA files with ASP.Net bundling could become very tricky very fast, because of paths and HTML files. So I went with the optimizer that is included with Durandal.

Once you install Durandal using NuGet (or the HotTowel NuGet package), you will see a file called optimizer.exe under App\durandal\amd\. This is the tool we will use to package our SPA application.

  1. To use this, you will need to install Node.js. You can download it from nodejs.org.
  2. Click Start > All Programs > Node.js > Node.js command prompt
  3. Navigate to your project's \App\durandal\amd folder
  4. Run optimizer.exe
  5. This will scan everything under your \App folder and create the combined and optimized file main-built.js

Now we can use this file in our application; the following code snippet shows how to include it in your index.cshtml file. We will use the full files when we are in debug mode and, when running a release configuration, use the optimized file.

@Scripts.Render("~/scripts/vendor")
@if(HttpContext.Current.IsDebuggingEnabled) {
     <script src="~/App/durandal/amd/require.js" data-main="App/main"></script>
}
else
{
    <script type="text/javascript" src="~/App/main-built.js"></script>
}

As you can see, I am using ASP.Net bundles to combine and compress all the standard JS files like Bootstrap, jQuery plugins, or any other components that I use that are not part of my SPA code, and I use the optimizer-created file for my SPA code. With this approach, our application code (SPA code) is sent to the browser as a single file, enabling faster download. If you want to bust the cache each time you deploy new code to your server, you can configure RequireJS to include a query string parameter with each resource it fetches.

require.config({
	paths: { "text": "durandal/amd/text" },
	urlArgs: "bust=" + (new Date()).getTime()
});

Reference: http://requirejs.org/docs/api.html#config.

Please note that the code snippet above is ideal only for development; for your release code, change it to something like:

urlArgs: "bust=v1.1"

This will make sure that your users do not re-download any resources the browser already has unless they have changed (which typically happens when you have a new release).

That’s all for now folks! Happy Programming!
Cheers!
Binu


LinkedIn Provider for ServiceStack Authentication


I love ServiceStack! It's just pure awesomeness! It includes a very robust authentication and authorization feature that is easily extensible. Read all about it here => https://github.com/ServiceStack/ServiceStack/wiki/Authentication-and-authorization.

…But it did not support a LinkedIn provider, so I spent some time this afternoon writing one; here is the code. (I told you it's easy, even I can do this :)

    public class LinkedinAuthProvider : OAuthProvider 
    {
        public const string Name = "linkedin";
        public static string Realm = "https://www.linkedin.com/uas/oauth2/";

        public static string PeopleDataUrl =
            "https://api.linkedin.com/v1/people/~:(id,first-name,last-name,formatted-name,industry,email-address)?format=json&";

        public LinkedinAuthProvider(IResourceManager appSettings): base(appSettings, Realm, Name) {}

        public override object Authenticate(IServiceBase authService, IAuthSession session, ServiceStack.ServiceInterface.Auth.Auth request)
        {
            var tokens = Init(authService, ref session, request);
            var code = authService.RequestContext.Get<IHttpRequest>().QueryString["code"];
            var error = authService.RequestContext.Get<IHttpRequest>().QueryString["error"];
            var isPreAuthError = !error.IsNullOrEmpty();
            if (isPreAuthError)
            {
                return authService.Redirect(session.ReferrerUrl);
            }
            var isPreAuthCallback = !code.IsNullOrEmpty();
            if (!isPreAuthCallback)
            {
                var preAuthUrl = Realm + "authorization?response_type=code&client_id={0}&scope={1}&state={2}&redirect_uri={3}";
                preAuthUrl = preAuthUrl.Fmt(ConsumerKey, "r_fullprofile%20r_emailaddress", Guid.NewGuid().ToString(), this.CallbackUrl.UrlEncode());
      
                authService.SaveSession(session, SessionExpiry);
                return authService.Redirect(preAuthUrl);
            }

            var accessTokenUrl = Realm +
                                 "accessToken?grant_type=authorization_code&code={0}&redirect_uri={1}&client_id={2}&client_secret={3}";
            accessTokenUrl = accessTokenUrl.Fmt(code, this.CallbackUrl.UrlEncode(), this.ConsumerKey,
                                                this.ConsumerSecret);

            try
            {
                var contents = accessTokenUrl.GetStringFromUrl();
                var authInfo = JsonObject.Parse(contents);
                tokens.AccessTokenSecret = authInfo["access_token"];
                session.IsAuthenticated = true;
                authService.SaveSession(session, SessionExpiry);
                OnAuthenticated(authService, session, tokens, authInfo.ToDictionary());

                
                return authService.Redirect(session.ReferrerUrl.AddHashParam("s", "1"));
            }
            catch (WebException we)
            {
                var statusCode = ((HttpWebResponse)we.Response).StatusCode;
                if (statusCode == HttpStatusCode.BadRequest)
                {
                    return authService.Redirect(session.ReferrerUrl.AddHashParam("f", "AccessTokenFailed"));
                }
            }
            return null;
        }
        protected override void LoadUserAuthInfo(AuthUserSession userSession, IOAuthTokens tokens, Dictionary<string, string> authInfo)
        {
            var url = PeopleDataUrl + "oauth2_access_token={0}";
            url = url.Fmt(authInfo["access_token"]);
            var json = url.GetStringFromUrl();

            var obj = JsonObject.Parse(json);
            tokens.UserId = obj.Get("id");
            tokens.UserName = obj.Get("emailAddress");
            tokens.DisplayName = obj.Get("formattedName");
            tokens.FirstName = obj.Get("firstName");
            tokens.LastName = obj.Get("lastName");
            tokens.Email = obj.Get("emailAddress");
            
            LoadUserOAuthProvider(userSession, tokens);
        }

        public override void LoadUserOAuthProvider(IAuthSession authSession, IOAuthTokens tokens)
        {
            var userSession = authSession as AuthUserSession;
            if (userSession == null) return;

            userSession.DisplayName = tokens.DisplayName ?? userSession.DisplayName;
            userSession.FirstName = tokens.FirstName ?? userSession.FirstName;
            userSession.LastName = tokens.LastName ?? userSession.LastName;
            userSession.PrimaryEmail = tokens.Email ?? userSession.PrimaryEmail ?? userSession.Email;
        }
    }
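To use it, register the provider with AuthFeature like any other. Here is a minimal sketch of the wiring in AppHost.Configure(), assuming appSettings entries oauth.linkedin.ConsumerKey and oauth.linkedin.ConsumerSecret (the convention OAuthProvider reads by default):

// ConfigurationResourceManager reads from <appSettings> in web.config
var appSettings = new ConfigurationResourceManager();
Plugins.Add(new AuthFeature(() => new AuthUserSession(),
    new IAuthProvider[] {
        new CredentialsAuthProvider(),
        new LinkedinAuthProvider(appSettings),
    }));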

Happy programming !

Cheers!

Binu


Bundle your resources to speed up your ASP.Net website


I am not going to write a big blog post on this, because there are already quite a few good articles out there on this topic. The reason I am writing is to share a very good article for anyone who wants to learn about the bundling and minification features that are available with ASP.Net 4.5.

It is written by Rick Anderson, and you can read the full article here.


How to call ServiceStack API end point from JavaScript


Why is this important, you might ask? Yes, it is straightforward, and we have all done this several times. But the reason I thought of writing about it is that I wasted a couple of hours on a silly oversight. In the title I said an API based on ServiceStack, but this is no different from any other RESTful endpoint, e.g., WebAPI.

So, my service takes an input like this:

[Route("/usermeta")]
public class UserMeta
    {
        public string Email { get; set; }
        public List<string> Expertise { get; set; }
    }

and the service implementation goes something like this:

[Authenticate]
public object Post(UserMeta request)
{
    //I do something here
}

Now, to call this API, this is what we do in the JavaScript world:

                    var request = { };
                   
                    request.Email = this.profileEmail;
                    request.Expertise = this.expertAreas(); //ERROR!!!
                   
                    $.ajax({
                        type: "post",
                        url: 'api/usermeta',
                        dataType: 'json',
                        data:request,
                        success: function (data) {
                            toastr.success("Everything is saved, you are all set for now!", 'Wohoo!');
                        },
                        error: function (data) {
                            toastr.error("Oh! snap...something went wrong!", 'Error!');
                        }
                    });

Now, this is straightforward, isn't it? I thought so too. But when I ran this code, I got the email address correctly on the server side, but the field "Expertise" was null. The issue was that I was passing the object literal as-is while the request binder expected the list as JSON.

Everything started working the moment I converted "this.expertAreas()" to JSON using JSON.stringify() (from json2.js). A simple, stupid thing, but I lost two hours!!
(Thanks to my wife for pointing out the error :)

The corrected code is given below; you could stringify the whole object literal too:

                    var request = { };
                   
                    request.Email = this.profileEmail;
                    request.Expertise = JSON.stringify(this.expertAreas());
                   
                    $.ajax({
                        type: "post",
                        url: 'api/usermeta',
                        dataType: 'json',
                        data:request,
                        success: function (data) {
                            toastr.success("Everything is saved, you are all set for now!", 'Wohoo!');
                        },
                        error: function (data) {
                            toastr.error("Oh! snap...something went wrong!", 'Error!');
                        }
                    });

Cheers!


Transform your templated web.config files with NAnt


There are many steps one needs to do to set up a good working development environment. In large products, where many project files and solutions need to be built in a certain order, managing all the config files can be a nightmare. This post is not going to tell you how to do things at such a large scale, but I want to show how easy it is to use the NAnt tool to do web.config transformations. This becomes one of the steps we run as part of a continuous integration (CI) build or a local build. I have seen development teams struggle with setting up a local development environment. It becomes a pain when project builds are not well thought out or designed well. My advice to any development team or developer is to spend some time thinking through your development, build, and deployment processes. Treat this as technical debt and deal with it aggressively before it reaches a point where you are spending hours every day trying to build your solution(s) so that you can write code.

Enough of the story, let us get to the point.

Our aim is to transform a template config file. Let us name config files in the format web.config.TEMPLATE. The idea is to use NAnt to do a transformation while copying this file to the target name of web.config. In the template file, you will need to tokenize the parts that you want to change depending on the environment. For example, when you are developing locally, you may want to point to your local database server; in a QA environment, this will be a QA database server. You get the idea.

NAnt works with a project file for the build. There are several tasks NAnt can do; one of these is the copy task. All NAnt-related files are XML files. You run NAnt from the command prompt, and it takes many command line parameters.

nant.exe -buildfile:config.build -D:sourcefile=.\source\web.config.TEMPLATE -D:propertyfile=.\build\local.properties -D:destinationfile=.\source\web.config

The command line switch -buildfile is the name of the build file; -D allows us to pass multiple property name-value pairs.

For our little experiment, I want to build web.config files for two scenarios: when I work on my local machine, I need a local version of web.config created, and when I move to my QA environment, I need a different web.config file. The following picture shows how the files and folders are set up:

What I have in the \build folder: property files for the various environments.

What I have in the \source folder: the web.config.TEMPLATE file (the actual config file is generated here as well).

This is the NAnt project file. A default.properties file is always loaded; we will use it to capture all the properties that are common across the different configurations we are building.

<project default="buildconfig">  
	<property name="propertyfile" value="invalid.file" overwrite="false" />  

	<if test="${file::exists('default.properties')}">
		<echo message="Loading default.properties" />
		<include buildfile="default.properties" />
	</if>
	<echo message="Loading ${propertyfile}" />
	
	<include buildfile="${propertyfile}" failonerror="false" unless="${string::contains(propertyfile, 'invalid.file')}" />  
	
	<target name="buildconfig">  
		<copy file="${sourcefile}" tofile="${destinationfile}" overwrite="true">  
		  <filterchain>  
			<expandproperties />  
		  </filterchain>  
		</copy>  

	</target>  
</project> 

So, for example, we have the following web.config.TEMPLATE file that needs to get built for the local and QA environments:

<?xml version="1.0"?>
<configuration>
  <connectionStrings>
    <add name="DatabaseConnection"  connectionString="Server=${DBServerName};Database=${DBName};Trusted_Connection=${TrustedConnectionValue}" />
  </connectionStrings>
</configuration>

So we have tokens for the database server, database name, and trusted connection value. If, for example, the database name and the trusted connection value do not change from environment to environment, then you keep those property values in the default.properties file. Since this file is always loaded (see the build file), NAnt will be able to replace those tokens. The environment-specific property file is then passed on the command line used to run NAnt.

Our default.properties file:

<project>  
   <property name="DBName" value="FancyApp" />  
   <property name="TrustedConnectionValue" value="true" />  
</project>  

and our local.properties file, where we specify the database server name as (local). When we are running for another environment like QA, a qa.properties file can be created and passed in as a parameter, as shown after the listing below.

<project>  
   <property name="DBServerName" value="(local)" />  
</project>  
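For instance, a hypothetical QA run would only swap the property file (with qa.properties defining DBServerName for the QA server):

nant.exe -buildfile:config.build -D:sourcefile=.\source\web.config.TEMPLATE -D:propertyfile=.\build\qa.properties -D:destinationfile=.\source\web.config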

So, now, with these files all set in the right places, if we run the NAnt command line, NAnt will run the copy task with a FilterChain to expand the properties using the files and/or property values supplied. The resulting file is the config file you would want to use in the specified environment.

I am planning to expand this post in the coming weeks to show how we can use MSBuild to build the project, deploy it to the specified location, automatically check out files from SVN, run some unit/integration tests, etc.

Once we have a full local build worked out, I will expand it further to use a CI build server like TeamCity to kick off automated builds.

Exciting…

Until then, be productive :)
