Virat Kohli promises to ‘come back stronger’ next season after nightmarish IPL 2017 campaign

Virat Kohli's Royal Challengers Bangalore endured a disastrous campaign in the Indian Premier League this season. This came after RCB finished as runners-up in the 2016 edition of the cash-rich T20 league.

Royal Challengers Bangalore were jolted before the tournament even got underway. Two of their key batsmen, KL Rahul and Sarfaraz Khan, were ruled out of the entire season with injuries. Injured captain Kohli and AB de Villiers were also sidelined for the first few games.

Bangalore looked like an unsettled unit throughout the tournament, winning just three of their 14 matches. RCB finished with only seven points, their lowest in IPL history.

Kohli, however, battled like a lone warrior, top-scoring with 308 runs including four half-centuries. The batting-heavy RCB failed to fire in unison, with the poor form of Chris Gayle and De Villiers' inconsistency hurting the team most.

Kohli, who will most likely be retained by RCB in the 11th season, has promised to bounce back stronger next year. "Really humbled by all the love and support that we got this season. We will come back stronger next season," Kohli tweeted on 17 May 2017, after RCB finished at the bottom of the points table.

In the 2016 season, Kohli had struck a record 973 runs and hammered a record four centuries.

The 28-year-old batsman's next assignment starts from June 4, when he returns to national duty to lead India in their title defence of the ICC Champions Trophy. India take on arch-rivals Pakistan at Edgbaston on Super Sunday.

UPDATE: Houston, Harris County grant $7.5 million to 28 non-profits helping Harvey victims

Photo: Al Ortiz | Houston Public Media. Houston Mayor Sylvester Turner and Harris County Judge Ed Emmett announce the recipients of the joint Harvey Relief Fund on October 3rd, 2017.

Houston Mayor Sylvester Turner and Harris County Judge Ed Emmett announced Tuesday the first 28 recipients of grants funded through the joint fund that the City and the County created to help Harvey victims.

Tony Chase, co-chair of the fund, says that, so far, they have raised approximately $79 million.

The first round of grants amounts to $7.5 million, and the chosen non-profits will use the money to provide services such as temporary housing, home repairs and rental assistance, among other things.

Chase also says the goal is to have all the monies in the fund distributed in the next nine to 12 months.

The non-profits that received the funds include household names, such as Catholic Charities of the Archdiocese of Galveston-Houston and the Salvation Army, but also groups that work in certain parts of greater Houston, like the Katy Christian Ministries and the Fifth Ward Community Redevelopment Corporation.

Mayor Turner emphasized during a press conference to announce the grants that the help isn't just for people with low income levels. "You may be middle income, OK? But all of your stuff is on the curb and you've exhausted your savings and your bank account. Well, you need help too," Turner noted.

The grant contracts specify the selected non-profits must use the funds for programs benefiting Harvey victims within the next 90 days.

Get hands-on Emma Watson looks from 'Beauty and the Beast'

Hollywood actress Emma Watson is not only championing gender equality as the UN Women Goodwill Ambassador, but is also impressing Indian fashionistas with her Disney princess look in the musical romantic fantasy film Beauty and the Beast. We have rounded up her statement beauty and hair looks that have left many inspired!

For make-up

Use a lightweight foundation, blended well, that will give you a healthy and moisturised look, paired with a concealer under your eyes.

Brush out your brows and fill them in softly with a brow pencil.

To achieve the modern graphic eyeliner, line your eyes with a soft brown gel liner and flick it out like a cat eye, except leave the flick unfilled.

Complete the look with a coral lipstick, which will balance out the eyeliner.

For hair

Make sure you have some tools handy when you decide to do this on yourself: a dryer, medium-sized tongs, bobby pins, u-pins and a back-combing brush. It's always better if your hair is a day old and not washed the same day.

Make sure your hair has enough texture to work with; take large sections and spray every section with a heat-protectant product that will make your curls last longer and define them. Use the medium tongs on every section and curl it away from the face, giving it a looser texture to work with.

Once you're done with the style, open it using some smoothing cream to loosen the curls and add more shine and softness.

Tie a ponytail at your occipital bone and fix the hair around the ponytail to create more texture and definition. Take smaller pieces of the sections and dress them around the ponytail for better grip and a much fuller look, making sure all the hair in the ponytail is incorporated.

Once the back is complete, take smaller sections from the front and dress them away from the face. You can twist and open the twist, slightly teasing it with your fingers, and then fix it on the ponytail, making sure the whole look comes together.

The front section has to be raised while blow-drying to make sure extra volume is created at the roots. Once that's done, hold the hair in the same position and spray it to create a wave, fixing it in place.

Fix the front section towards the side or towards the ponytail, depending on how long or short the hair is.

At the end, spritz a shine or setting spray to lock in the moisture and the look.

Generative Adversarial Networks: Generate images using Keras GAN [Tutorial]

You might have worked with the popular MNIST dataset before, but in this article, we will be generating new MNIST-like images with a Keras GAN. It can take a very long time to train a GAN; however, this problem is small enough to run on most laptops in a few hours, which makes it a great example. The following excerpt is taken from the book Deep Learning Quick Reference, authored by Mike Bernico.

The network architecture that we will be using here has been found and optimized by many folks, including the authors of the DCGAN paper and people like Erik Linder-Norén, whose excellent collection of GAN implementations, called Keras GAN, served as the basis of the code we used here.

Loading the MNIST dataset

The MNIST dataset consists of 70,000 hand-drawn digits, 0 to 9. Keras provides us with a built-in loader that splits it into 60,000 training images and 10,000 test images. We will use the following code to load the dataset:

import numpy as np
from keras.datasets import mnist

def load_data():
    (X_train, _), (_, _) = mnist.load_data()
    X_train = (X_train.astype(np.float32) - 127.5) / 127.5  # scale pixels to [-1, 1]
    X_train = np.expand_dims(X_train, axis=3)               # add a channel dimension
    return X_train

As you probably noticed, we're not returning any of the labels or the testing dataset. We're only going to use the training dataset. The labels aren't needed because the only labels we will be using are 0 for fake and 1 for real. These are real images, so they will all be assigned a label of 1 at the discriminator.

Building the generator

The generator uses a few new layers that we will talk about in this section. First, take a moment to skim through the following code:

from keras.layers import Input, Dense, Reshape, BatchNormalization, UpSampling2D, Conv2D, Activation
from keras.models import Model

def build_generator(noise_shape=(100,)):
    input = Input(noise_shape)
    x = Dense(128 * 7 * 7, activation="relu")(input)
    x = Reshape((7, 7, 128))(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = UpSampling2D()(x)
    x = Conv2D(128, kernel_size=3, padding="same")(x)
    x = Activation("relu")(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = UpSampling2D()(x)
    x = Conv2D(64, kernel_size=3, padding="same")(x)
    x = Activation("relu")(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = Conv2D(1, kernel_size=3, padding="same")(x)
    out = Activation("tanh")(x)
    model = Model(input, out)
    print("-- Generator --")
    model.summary()
    return model

We have not previously used the UpSampling2D layer. This layer increases the rows and columns of the input tensor, leaving the channels unchanged. It does this by repeating the values in the input tensor. By default, it will double the input. If we give an UpSampling2D layer a 7 x 7 x 128 input, it will give us a 14 x 14 x 128 output.

Typically when we build a CNN, we start with an image that is very tall and wide and use convolutional layers to get a tensor that's very deep but less tall and wide. Here we will do the opposite. We'll use a dense layer and a reshape to start with a 7 x 7 x 128 tensor and then, after doubling it twice, we'll be left with a 28 x 28 tensor. Since we need a grayscale image, we can use a convolutional layer with a single unit to get a 28 x 28 x 1 output. This sort of generator arithmetic is a little off-putting and can seem awkward at first, but after a few painful hours you will get the hang of it!
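If the generator arithmetic still feels abstract, a quick shape trace can make it concrete. The following is our own minimal sketch, not part of the book's code: it keeps only the shape-changing layers of the generator above (dropping the activations and batch normalization) and prints the shape after every step.

from keras.layers import Input, Dense, Reshape, UpSampling2D, Conv2D
from keras.models import Model

inp = Input((100,))                                  # same 100-dim noise vector as above
x = Dense(128 * 7 * 7)(inp)                          # -> (None, 6272)
x = Reshape((7, 7, 128))(x)                          # -> (None, 7, 7, 128)
x = UpSampling2D()(x)                                # -> (None, 14, 14, 128): rows and cols doubled
x = UpSampling2D()(x)                                # -> (None, 28, 28, 128)
out = Conv2D(1, kernel_size=3, padding="same")(x)    # -> (None, 28, 28, 1)

Model(inp, out).summary()                            # prints the shape at every layer

Reading the summary confirms that each UpSampling2D call doubles the rows and columns while leaving the channel count alone, and that the final single-filter convolution collapses the 128 channels into one grayscale image.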
Building the discriminator

The discriminator is really, for the most part, the same as any other CNN. Of course, there are a few new things that we should talk about. We will use the following code to build the discriminator:

from keras.layers import LeakyReLU, Dropout, ZeroPadding2D, Flatten

def build_discriminator(img_shape):
    input = Input(img_shape)
    x = Conv2D(32, kernel_size=3, strides=2, padding="same")(input)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = Conv2D(64, kernel_size=3, strides=2, padding="same")(x)
    x = ZeroPadding2D(padding=((0, 1), (0, 1)))(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = Conv2D(128, kernel_size=3, strides=2, padding="same")(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = Conv2D(256, kernel_size=3, strides=1, padding="same")(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.25)(x)
    x = Flatten()(x)
    out = Dense(1, activation='sigmoid')(x)
    model = Model(input, out)
    print("-- Discriminator --")
    model.summary()
    return model

First, you might notice the oddly shaped ZeroPadding2D() layer. After the second convolution, our tensor has gone from 28 x 28 x 1 to 7 x 7 x 64. This layer just gets us back to an even number, adding zeros on one side of both the rows and columns so that our tensor is now 8 x 8 x 64.

More unusual is the use of both batch normalization and dropout. Typically, these two layers are not used together; however, in the case of GANs, they do seem to benefit the network.

Building the stacked model

Now that we've assembled both the generator and the discriminator, we need to assemble a third model that is the stack of both models together, which we can use for training the generator given the discriminator loss. To do that, we can just create a new model, this time using the previous models as layers in the new model, as shown in the following code:

from keras.optimizers import Adam

gen_optimizer = Adam(lr=0.0002, beta_1=0.5)
disc_optimizer = Adam(lr=0.0002, beta_1=0.5)

discriminator = build_discriminator(img_shape=(28, 28, 1))
discriminator.compile(loss='binary_crossentropy',
                      optimizer=disc_optimizer,
                      metrics=['accuracy'])

generator = build_generator()
generator.compile(loss='binary_crossentropy', optimizer=gen_optimizer)

z = Input(shape=(100,))
img = generator(z)

discriminator.trainable = False
real = discriminator(img)

combined = Model(z, real)
combined.compile(loss='binary_crossentropy', optimizer=gen_optimizer)

Notice that we're setting the discriminator's trainable attribute to False before building and compiling the combined model. This means that for the combined model we will not be updating the weights of the discriminator during backpropagation. We will freeze these weights and only move the generator weights with the stack. The discriminator will be trained separately. Note also that the discriminator is compiled before its trainable flag is switched off: Keras captures a model's trainable state at compile time, so this ordering keeps the standalone discriminator trainable while the copy embedded in the stack stays frozen.

If you'll notice, we're creating two custom Adam optimizers. This is because many times we will want to change the learning rate for only the discriminator or generator, slowing one or the other down so that we end up with a stable GAN where neither is overpowering the other. You'll also notice that we're using beta_1 = 0.5. This is a recommendation from the original DCGAN paper that we've carried forward and also had success with. A learning rate of 0.0002 is a good place to start as well, and was found in the original DCGAN paper.
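Before moving on, it's worth convincing yourself that the freeze actually worked. Here is a small sanity check of our own (not part of the book's code) that you can run once the generator, discriminator, and combined models above are built and compiled:

# Our own sanity check, assuming the models defined above are in scope:
# the combined stack should expose only the generator's weights as trainable.
print("generator trainable tensors:", len(generator.trainable_weights))
print("combined trainable tensors: ", len(combined.trainable_weights))
print("discriminator trainable flag:", discriminator.trainable)

# If the freeze worked, these two counts match, so a call to
# combined.train_on_batch() can only move the generator.
assert len(generator.trainable_weights) == len(combined.trainable_weights)

If the counts differ, the trainable flag was most likely flipped after the combined model was compiled rather than before.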
The training loop

We have previously had the luxury of calling .fit() on our model and letting Keras handle the painful process of breaking the data apart into mini-batches and training for us. Unfortunately, because we need to perform the separate updates for the discriminator and the stacked model together for a single batch, we're going to have to do things the old-fashioned way, with a few loops. This is how things used to be done all the time, so while it's perhaps a little more work, it does admittedly leave me feeling nostalgic. The following code illustrates the training technique:

# epochs and batch_size are assumed to be set before this loop runs,
# and X_train comes from load_data() above
num_examples = X_train.shape[0]
num_batches = int(num_examples / float(batch_size))
half_batch = int(batch_size / 2)

for epoch in range(epochs + 1):
    for batch in range(num_batches):
        # noise images for the batch
        noise = np.random.normal(0, 1, (half_batch, 100))
        fake_images = generator.predict(noise)
        fake_labels = np.zeros((half_batch, 1))

        # real images for the batch
        idx = np.random.randint(0, X_train.shape[0], half_batch)
        real_images = X_train[idx]
        real_labels = np.ones((half_batch, 1))

        # Train the discriminator (real classified as ones and generated as zeros)
        d_loss_real = discriminator.train_on_batch(real_images, real_labels)
        d_loss_fake = discriminator.train_on_batch(fake_images, fake_labels)
        d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

        noise = np.random.normal(0, 1, (batch_size, 100))

        # Train the generator
        g_loss = combined.train_on_batch(noise, np.ones((batch_size, 1)))

        # Plot the progress
        print("Epoch %d Batch %d/%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" %
              (epoch, batch, num_batches, d_loss[0], 100 * d_loss[1], g_loss))

        if batch % 50 == 0:
            save_imgs(generator, epoch, batch)

There is a lot going on here, to be sure. As before, let's break it down block by block. First, let's see the code to generate noise vectors:

noise = np.random.normal(0, 1, (half_batch, 100))
fake_images = generator.predict(noise)
fake_labels = np.zeros((half_batch, 1))

This code is generating a matrix of noise vectors (called z in the GAN literature) and sending it to the generator. It gets a set of generated images back, which we're calling fake images. We will use these to train the discriminator, so the labels we want to use are 0s, indicating that these are in fact generated images.

Note that the shape here is half_batch x 28 x 28 x 1. The half_batch is exactly what you think it is. We're creating half a batch of generated images because the other half of the batch will be real data, which we will assemble next. To get our real images, we will generate a random set of indices across X_train and use that slice of X_train as our real images, as shown in the following code:

idx = np.random.randint(0, X_train.shape[0], half_batch)
real_images = X_train[idx]
real_labels = np.ones((half_batch, 1))

Yes, we are sampling with replacement in this case. It does work out, but it's probably not the best way to implement minibatch training. It is, however, probably the easiest and most common.

Since we are using these images to train the discriminator, and because they are real images, we will assign them 1s as labels, rather than 0s. Now that we have our discriminator training set assembled, we will update the discriminator. Also, note that we aren't using soft labels. That's because we want to keep things as easy as they can be to understand. Luckily, the network doesn't require them in this case.
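If you do want to experiment with that trick, one-sided label smoothing is a one-line change. The following sketch is our own illustration, not part of the book's code, and the half_batch value is just for demonstration:

half_batch = 16   # illustrative value; in the training loop above it is batch_size / 2

real_labels = np.ones((half_batch, 1))            # hard labels, as used in this tutorial
smooth_labels = 0.9 * np.ones((half_batch, 1))    # one-sided label smoothing

# Training the discriminator against 0.9 instead of 1.0 for real images
# discourages overconfident predictions; fake labels stay at 0.

If you adopted this, the real-image discriminator update in the next section would take smooth_labels in place of real_labels; smoothing only the real side is the variant commonly recommended in the GAN hacks collection.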
We will use the following code to train the discriminator:

# Train the discriminator (real classified as ones and generated as zeros)
d_loss_real = discriminator.train_on_batch(real_images, real_labels)
d_loss_fake = discriminator.train_on_batch(fake_images, fake_labels)
d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

Notice that here we're using the discriminator's train_on_batch() method. The train_on_batch() method does exactly one round of forward and backward propagation. Every time we call it, it updates the model once from the model's previous state.

Also, notice that we're making the update for the real images and fake images separately. This is advice that is given on the GAN hacks Git repository we previously referenced in the Generator architecture section. Especially in the early stages of training, when real images and fake images are from radically different distributions, batch normalization will cause problems with training if we were to put both sets of data in the same update.

Now that the discriminator has been updated, it's time to update the generator. This is done indirectly by updating the combined stack, as shown in the following code:

noise = np.random.normal(0, 1, (batch_size, 100))
g_loss = combined.train_on_batch(noise, np.ones((batch_size, 1)))

To update the combined model, we create a new noise matrix, and this time it will be as large as the entire batch. We will use that as an input to the stack, which will cause the generator to generate an image and the discriminator to evaluate that image. Finally, we will use the label of 1 because we want to backpropagate the error between a real image and the generated image.

Lastly, the training loop reports the discriminator and generator loss at each epoch/batch, and then, every 50 batches of every epoch, we will use save_imgs to generate example images and save them to disk, as shown in the following code:

print("Epoch %d Batch %d/%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" %
      (epoch, batch, num_batches, d_loss[0], 100 * d_loss[1], g_loss))

if batch % 50 == 0:
    save_imgs(generator, epoch, batch)

The save_imgs function uses the generator to create images as we go, so we can see the fruits of our labor. We will use the following code to define save_imgs:

import matplotlib.pyplot as plt

def save_imgs(generator, epoch, batch):
    r, c = 5, 5
    noise = np.random.normal(0, 1, (r * c, 100))
    gen_imgs = generator.predict(noise)
    gen_imgs = 0.5 * gen_imgs + 0.5   # rescale from [-1, 1] to [0, 1] for display
    fig, axs = plt.subplots(r, c)
    cnt = 0
    for i in range(r):
        for j in range(c):
            axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
            axs[i, j].axis('off')
            cnt += 1
    fig.savefig("images/mnist_%d_%d.png" % (epoch, batch))
    plt.close()

It uses only the generator, creating a noise matrix and retrieving an image matrix in return. Then, using matplotlib.pyplot, it saves those images to disk in a 5 x 5 grid.
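Once training finishes, using the generator on its own is straightforward. This sketch is our own addition, not from the book: it assumes the trained generator from above is in scope, draws a few new digits, and undoes the [-1, 1] scaling applied in load_data(), exactly as save_imgs does.

# Draw 10 fresh MNIST-like digits from the trained generator
noise = np.random.normal(0, 1, (10, 100))
digits = generator.predict(noise)      # tanh output, values in [-1, 1]
digits = 0.5 * digits + 0.5            # rescale to [0, 1] for display
print(digits.shape)                    # (10, 28, 28, 1)

Note that nothing but the noise matrix is needed at this point; the discriminator has done its job during training and plays no part in generating new images.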
Performing model evaluation

"Good" is somewhat subjective when you're building a deep neural network to create images. Let's take a look at a few examples of the training process, so you can see for yourself how the GAN begins to learn to generate MNIST.

At the very first batch of the very first epoch, the generator doesn't really know anything about generating MNIST; its output is just noise. But just 50 batches in, something is happening, and after 200 batches of epoch 0, we can almost see numbers. After one full epoch, the generated numbers look pretty good, and we can see how the discriminator might be fooled by them. At this point, we could probably continue to improve a little bit, but it looks like our GAN has worked, as the computer is generating some pretty convincing MNIST digits.

Thus, we see the power of GANs in action when it comes to image generation using the Keras library. If you found the above article to be useful, make sure you check out our book Deep Learning Quick Reference for more such interesting coverage of popular deep learning concepts and their practical implementation.

Read Next:

Keras 2.2.0 releases!
2 ways to customize your deep learning models with Keras
How to build Deep convolutional GAN using TensorFlow and Keras

ClojureCUDA 0.6.0 now supports CUDA 10

ClojureCUDA is a Clojure library that supports parallel computations on the GPU with CUDA. With this library, you can access high-performance computing and GPGPU in Clojure.

Installation

ClojureCUDA 0.6.0 now has support for the new CUDA 10. To start using it:

Install the CUDA 10 Toolkit.
Update your drivers.
Update the ClojureCUDA version in project.clj.

All the existing code should work without requiring any changes.

CUDA and libraries

CUDA is the most widely used environment for high-performance computing on NVIDIA GPUs. You can now use CUDA directly from the interactive Clojure REPL without having to wrangle with the C++ toolchain. High-performance libraries like Neanderthal take advantage of ClojureCUDA to deliver speed dynamically to Clojure programs. With these higher-level libraries, you can perform fast calculations with just a few lines of Clojure; you don't even have to write the GPU code yourself. But writing the lower-level GPU code is also not so difficult in an interactive Clojure environment.

ClojureCUDA features

The ClojureCUDA library has features like high performance and optimization for Clojure.

High-performance computing: CUDA enables various hardware optimizations on NVIDIA GPUs. Users can access the leading CUDA libraries for numerical computing like cuBLAS, cuFFT, and cuDNN.

Optimized for Clojure: ClojureCUDA is built with a focus on Clojure. The interface and functions fit into a functional style and are aligned to number crunching with CUDA.

Reusable: The library closely follows the CUDA driver API, so users can easily translate examples from the best CUDA books.

Free and open source: It is licensed under the Eclipse Public License (EPL), the same license used for Clojure. ClojureCUDA and other libraries by uncomplicate are open source. You can choose to contribute on GitHub or donate on Patreon.

For more details and code examples, visit the dragan blog.

Read next:

Clojure 1.10.0-beta1 is out!
Stable release of CUDA 10.0 out, with Turing support, tools and library changes
NVTOP: An htop-like monitoring tool for NVIDIA GPUs on Linux

VoX appointed Canadian PR rep for St. Vincent and the Grenadines

TORONTO — Watch for St. Vincent and the Grenadines to make some moves in Canada, as the destination has selected VoX International as its Canadian public relations and communications representative.

As such, VoX will manage media relations and day-to-day operations for the Canadian market. It will also host a number of media fam trips to the islands.

"By selecting VoX International as our public relations agency in Canada, we are showing our commitment to the Canadian travel market by working with an industry-leading company," said Glen Beache, CEO, SVGTA. "We cannot wait to welcome Canadian travellers to our destination in 2018."

St. Vincent and the Grenadines is poised to welcome more Canadians this year with direct air service, the opening of its new Argyle International Airport, and a full calendar of festivals.

A collection of 32 islands and cays in the Caribbean, St. Vincent and the Grenadines stretches 45 miles south from the main island of St. Vincent and includes eight inhabited islands. Surrounded by water, it's known to have some of the best sailing waters in the world; the recently opened US$250 million Glossy Bay Marina on Canouan has set the bar for future marine development in the region.

On land, St. Vincent and the Grenadines offers hiking along the mountainous Vermont Nature Trail, as well as a wide assortment of world-class restaurants on all nine inhabited islands. Blessed with fertile volcanic soil, the islands are known for locally grown fruits, vegetables and spices, all of which are used to create such specialties as seafood callaloo soup, roasted breadfruit, and fried jack fish.

Agents can learn more about the destination by visiting travelweeklearningcentre.com/st-vincent-and-the-grenadines.

Opening this December: Melia's newest resort in Punta Cana

first_img PALMA DE MALLORCA — Melia Hotels International is set to debut a sprawling new all-inclusive resort in Punta Cana in time for the busy winter season.Costing $110 million and featuring 288 suite-style accommodations, The Grand Reserve at Paradisus Palma Real is located 30 minutes from Punta Cana International Airport on the famed Playa de Bavaro. Upon completion, it will also boast seven restaurants and bars, a full-service spa, kids club, aqua adventure park and a wellness facility with gym equipment.Rooms will range from 800 to over 3,000 square feet, with suites including either one or two bedrooms, living and dining spaces, soaking tubs, walk-in showers, private balconies with hydro-massage whirlpool tubs and outdoor living spaces. Most noteworthy of the suites are The Grand Reserve’s Swim-Up Suites featuring one or two bedrooms, direct access to an exclusive pool, lush garden areas and a solarium.Co-existing with hotel guests will be an additional 144 suites for members of Circle by Melia for a total of 432 suites in the entire building. Launched in 2016, Circle by Melia is Melia Hotels International’s vacation membership program that includes private transfers, exclusive dining experiences, spa treatments and special add-on amenities for children and couples.More news:  Sunwing offers ultimate package deal ahead of YXU flights to SNU, PUJOther top selling points will be the Cigar Bar, coming this winter, The Grand Reserve Spa by Natura Bissé featuring 14 treatment rooms and a relaxation room, and Stay at One & Play at Three privileges, which include complete access to the extensive property offerings at both The Reserve and Paradisus Palma Real.“We’re eager and excited to welcome The Grand Reserve to the Melia Hotels International family later this year, said Alvaro Tejada, Senior Vice President for the Americas for Melia Hotels International.” With its unique design superior amenities and new technological advancements, The Grand Reserve is set to radically transform the way travelers experience our hotels in the Dominican Republic.”In addition to The Grand Reserve, Melia Hotels International is transforming its Meliá Caribe Tropical property into two distinct properties: Meliá Punta Cana Beach Resort for adults only, and Meliá Caribe Beach resort for families, set to be completed by November 2018. Share Thursday, June 21, 2018 Posted by Tags: Melia Hotels, Punta Canacenter_img Travelweek Group Opening this December: Melia’s newest resort in Punta Cana << Previous PostNext Post >>last_img read more